I0714 23:21:02.163585 7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0714 23:21:02.163759 7 e2e.go:129] Starting e2e run "aa1e7c69-c4e1-47bf-9144-010d97ffa572" on Ginkgo node 1
{"msg":"Test Suite starting","total":294,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1594768861 - Will randomize all specs
Will run 294 of 5214 specs

Jul 14 23:21:02.217: INFO: >>> kubeConfig: /root/.kube/config
Jul 14 23:21:02.219: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jul 14 23:21:02.238: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul 14 23:21:02.276: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul 14 23:21:02.276: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jul 14 23:21:02.277: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jul 14 23:21:02.286: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jul 14 23:21:02.286: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jul 14 23:21:02.286: INFO: e2e test version: v1.20.0-alpha.0.4+2d327ac4558d78
Jul 14 23:21:02.288: INFO: kube-apiserver version: v1.18.4
Jul 14 23:21:02.288: INFO: >>> kubeConfig: /root/.kube/config
Jul 14 23:21:02.292: INFO: Cluster IP family: ipv4
SSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 14 23:21:02.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
Jul 14 23:21:02.346: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jul 14 23:21:02.368: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1810 /api/v1/namespaces/watch-1810/configmaps/e2e-watch-test-label-changed f6cb4a5a-cad9-43ff-bc70-955c6ca098aa 1206525 0 2020-07-14 23:21:02 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-07-14 23:21:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 14 23:21:02.368: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1810 /api/v1/namespaces/watch-1810/configmaps/e2e-watch-test-label-changed f6cb4a5a-cad9-43ff-bc70-955c6ca098aa 1206526 0 2020-07-14 23:21:02 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-07-14 23:21:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 14 23:21:02.368: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1810 /api/v1/namespaces/watch-1810/configmaps/e2e-watch-test-label-changed
f6cb4a5a-cad9-43ff-bc70-955c6ca098aa 1206527 0 2020-07-14 23:21:02 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-07-14 23:21:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jul 14 23:21:12.448: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1810 /api/v1/namespaces/watch-1810/configmaps/e2e-watch-test-label-changed f6cb4a5a-cad9-43ff-bc70-955c6ca098aa 1206563 0 2020-07-14 23:21:02 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-07-14 23:21:12 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 14 23:21:12.448: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1810 /api/v1/namespaces/watch-1810/configmaps/e2e-watch-test-label-changed f6cb4a5a-cad9-43ff-bc70-955c6ca098aa 1206564 0 2020-07-14 23:21:02 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-07-14 23:21:12 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 14 23:21:12.449: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1810
/api/v1/namespaces/watch-1810/configmaps/e2e-watch-test-label-changed f6cb4a5a-cad9-43ff-bc70-955c6ca098aa 1206565 0 2020-07-14 23:21:02 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-07-14 23:21:12 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 14 23:21:12.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1810" for this suite.

• [SLOW TEST:10.184 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":294,"completed":1,"skipped":4,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 14 23:21:12.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jul 14 23:21:12.528: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jul 14 23:21:18.450: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul 14 23:22:10.485: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jul 14 23:22:12.489: INFO: Creating deployment "test-rollover-deployment"
Jul 14 23:22:12.517: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jul 14 23:22:14.543: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jul 14 23:22:14.548: INFO: Ensure that both replica sets have 1 created replica
Jul 14 23:22:14.551: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jul 14 23:22:14.795: INFO: Updating deployment test-rollover-deployment
Jul 14 23:22:14.795: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jul 14 23:22:16.802: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jul 14 23:22:16.808: INFO: Make sure deployment "test-rollover-deployment" is complete
Jul 14 23:22:16.814: INFO: all replica sets need to contain the pod-template-hash label
Jul 14 23:22:16.815: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365732, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365732,
loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365736, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365732, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7586b49c69\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 14 23:22:18.841: INFO: all replica sets need to contain the pod-template-hash label Jul 14 23:22:18.841: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365732, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365732, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365736, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365732, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7586b49c69\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 14 23:22:20.863: INFO: all replica sets need to contain the pod-template-hash label Jul 14 23:22:20.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365732, 
loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365732, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365740, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365732, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7586b49c69\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 14 23:22:22.822: INFO: all replica sets need to contain the pod-template-hash label Jul 14 23:22:22.822: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365732, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365732, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365740, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365732, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7586b49c69\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 14 23:22:24.821: INFO: all replica sets need to contain the pod-template-hash label Jul 14 23:22:24.821: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365732, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365732, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365740, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365732, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7586b49c69\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 14 23:22:26.820: INFO: all replica sets need to contain the pod-template-hash label Jul 14 23:22:26.820: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365732, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365732, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365740, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365732, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7586b49c69\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 14 23:22:28.819: INFO: all replica sets need to contain the pod-template-hash label Jul 14 23:22:28.819: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365732, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365732, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365740, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365732, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7586b49c69\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 14 23:22:30.825: INFO: 
Jul 14 23:22:30.825: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71
Jul 14 23:22:31.039: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-289 /apis/apps/v1/namespaces/deployment-289/deployments/test-rollover-deployment 6e9ef050-4e15-497e-be0c-45bf13f7e164 1206860 2 2020-07-14 23:22:12 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-07-14 23:22:14 +0000 UTC FieldsV1
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-07-14 23:22:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002292538 ClusterFirst map[] false false 
false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-14 23:22:12 +0000 UTC,LastTransitionTime:2020-07-14 23:22:12 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7586b49c69" has successfully progressed.,LastUpdateTime:2020-07-14 23:22:30 +0000 UTC,LastTransitionTime:2020-07-14 23:22:12 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jul 14 23:22:31.043: INFO: New ReplicaSet "test-rollover-deployment-7586b49c69" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7586b49c69 deployment-289 /apis/apps/v1/namespaces/deployment-289/replicasets/test-rollover-deployment-7586b49c69 cd102951-aa86-4785-b332-d3147fae5c20 1206849 2 2020-07-14 23:22:14 +0000 UTC map[name:rollover-pod pod-template-hash:7586b49c69] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 6e9ef050-4e15-497e-be0c-45bf13f7e164 0xc002292b57 0xc002292b58}] [] [{kube-controller-manager Update apps/v1 2020-07-14 23:22:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e9ef050-4e15-497e-be0c-45bf13f7e164\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7586b49c69,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7586b49c69] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002292be8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jul 14 23:22:31.043: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jul 14 23:22:31.043: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-289 /apis/apps/v1/namespaces/deployment-289/replicasets/test-rollover-controller 7b9f2f3c-30d4-4617-90dd-2d8eb4563c73 1206859 2 2020-07-14 23:21:12 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 6e9ef050-4e15-497e-be0c-45bf13f7e164 0xc002292947 0xc002292948}] [] [{e2e.test Update apps/v1 2020-07-14 23:21:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-07-14 23:22:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e9ef050-4e15-497e-be0c-45bf13f7e164\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0022929e8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 14 23:22:31.043: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-289 /apis/apps/v1/namespaces/deployment-289/replicasets/test-rollover-deployment-5686c4cfd5 783073a3-9902-42fa-b040-02a41fae030f 1206787 2 2020-07-14 23:22:12 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 6e9ef050-4e15-497e-be0c-45bf13f7e164 0xc002292a57 0xc002292a58}] [] [{kube-controller-manager Update apps/v1 2020-07-14 23:22:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e9ef050-4e15-497e-be0c-45bf13f7e164\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002292ae8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] 
nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 14 23:22:31.197: INFO: Pod "test-rollover-deployment-7586b49c69-z7d7b" is available: &Pod{ObjectMeta:{test-rollover-deployment-7586b49c69-z7d7b test-rollover-deployment-7586b49c69- deployment-289 /api/v1/namespaces/deployment-289/pods/test-rollover-deployment-7586b49c69-z7d7b 9dd1a14b-a2ca-42ef-bc47-194eff662b29 1206814 0 2020-07-14 23:22:16 +0000 UTC map[name:rollover-pod pod-template-hash:7586b49c69] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7586b49c69 cd102951-aa86-4785-b332-d3147fae5c20 0xc001c89b07 0xc001c89b08}] [] [{kube-controller-manager Update v1 2020-07-14 23:22:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd102951-aa86-4785-b332-d3147fae5c20\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-14 23:22:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.20\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vqjg8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vqjg8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vqjg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessageP
olicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-14 23:22:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-14 23:22:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-14 23:22:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-14 23:22:16 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.1.20,StartTime:2020-07-14 23:22:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-14 23:22:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://3c925b89c0865512dee5fdd10fd348dbd533ef413bd0cd7f08b6d4c16fe0582a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.20,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:22:31.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-289" for this suite. 
• [SLOW TEST:78.817 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":294,"completed":2,"skipped":139,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:22:31.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 14 23:22:32.393: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f6386de8-97c6-4436-abd8-5804e3c39a18" in namespace "projected-6690" to be "Succeeded or Failed" Jul 14 23:22:32.579: INFO: Pod "downwardapi-volume-f6386de8-97c6-4436-abd8-5804e3c39a18": Phase="Pending", Reason="", readiness=false. Elapsed: 185.966409ms Jul 14 23:22:34.913: INFO: Pod "downwardapi-volume-f6386de8-97c6-4436-abd8-5804e3c39a18": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.519378681s Jul 14 23:22:36.919: INFO: Pod "downwardapi-volume-f6386de8-97c6-4436-abd8-5804e3c39a18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.525054676s Jul 14 23:22:39.148: INFO: Pod "downwardapi-volume-f6386de8-97c6-4436-abd8-5804e3c39a18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.754552998s Jul 14 23:22:42.526: INFO: Pod "downwardapi-volume-f6386de8-97c6-4436-abd8-5804e3c39a18": Phase="Pending", Reason="", readiness=false. Elapsed: 10.132627043s Jul 14 23:22:44.922: INFO: Pod "downwardapi-volume-f6386de8-97c6-4436-abd8-5804e3c39a18": Phase="Pending", Reason="", readiness=false. Elapsed: 12.528109693s Jul 14 23:22:46.924: INFO: Pod "downwardapi-volume-f6386de8-97c6-4436-abd8-5804e3c39a18": Phase="Pending", Reason="", readiness=false. Elapsed: 14.530930229s Jul 14 23:22:48.928: INFO: Pod "downwardapi-volume-f6386de8-97c6-4436-abd8-5804e3c39a18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.534613766s STEP: Saw pod success Jul 14 23:22:48.928: INFO: Pod "downwardapi-volume-f6386de8-97c6-4436-abd8-5804e3c39a18" satisfied condition "Succeeded or Failed" Jul 14 23:22:48.931: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f6386de8-97c6-4436-abd8-5804e3c39a18 container client-container: STEP: delete the pod Jul 14 23:22:49.278: INFO: Waiting for pod downwardapi-volume-f6386de8-97c6-4436-abd8-5804e3c39a18 to disappear Jul 14 23:22:49.363: INFO: Pod downwardapi-volume-f6386de8-97c6-4436-abd8-5804e3c39a18 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:22:49.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6690" for this suite. 
• [SLOW TEST:18.137 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":294,"completed":3,"skipped":151,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:22:49.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:22:49.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3397" for this suite. 
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":294,"completed":4,"skipped":172,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:22:49.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jul 14 23:22:49.642: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
Jul 14 23:22:50.256: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jul 14 23:22:56.443: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365770, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365770, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365770, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365770, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)}
[the identical deployment status was logged again while the poll loop waited, at 23:22:58.446, 23:23:00.458, 23:23:02.466, 23:23:04.713, 23:23:06.720, 23:23:08.453, 23:23:11.509, 23:23:12.446, 23:23:14.544, 23:23:16.447, 23:23:18.539, 23:23:20.447, 23:23:24.827, 23:23:27.006, 23:23:30.150, 23:23:30.674, 23:23:32.447, 23:23:34.445, 23:23:36.599, 23:23:39.499, 23:23:41.875, 23:23:42.447, 23:23:44.499, 23:23:46.665, 23:23:48.446, 23:23:50.687, and 23:23:52.976 — only the log timestamp differs]
Jul 14 23:23:54.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365770, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365770, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365770, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365770, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 14 23:23:57.101: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365770, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365770, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365770, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730365770, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 14 23:24:00.686: INFO: Waited 797.87863ms for the sample-apiserver to be ready to handle requests. 
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:24:03.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4987" for this suite. • [SLOW TEST:73.997 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":294,"completed":5,"skipped":180,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:24:03.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 14 23:24:04.554: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2700' Jul 14 23:24:09.544: INFO: stderr: "" Jul 14 23:24:09.544: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Jul 14 23:24:09.544: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2700' Jul 14 23:24:09.883: INFO: stderr: "" Jul 14 23:24:09.883: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jul 14 23:24:10.887: INFO: Selector matched 1 pods for map[app:agnhost] Jul 14 23:24:10.887: INFO: Found 0 / 1 Jul 14 23:24:11.887: INFO: Selector matched 1 pods for map[app:agnhost] Jul 14 23:24:11.887: INFO: Found 0 / 1 Jul 14 23:24:12.887: INFO: Selector matched 1 pods for map[app:agnhost] Jul 14 23:24:12.888: INFO: Found 0 / 1 Jul 14 23:24:14.059: INFO: Selector matched 1 pods for map[app:agnhost] Jul 14 23:24:14.059: INFO: Found 0 / 1 Jul 14 23:24:14.887: INFO: Selector matched 1 pods for map[app:agnhost] Jul 14 23:24:14.887: INFO: Found 0 / 1 Jul 14 23:24:16.444: INFO: Selector matched 1 pods for map[app:agnhost] Jul 14 23:24:16.444: INFO: Found 0 / 1 Jul 14 23:24:17.165: INFO: Selector matched 1 pods for map[app:agnhost] Jul 14 23:24:17.165: INFO: Found 0 / 1 Jul 14 23:24:17.887: INFO: Selector matched 1 pods for map[app:agnhost] Jul 14 23:24:17.887: INFO: Found 0 / 1 Jul 14 23:24:18.887: INFO: Selector matched 1 pods for map[app:agnhost] Jul 14 23:24:18.887: INFO: Found 1 / 1 Jul 14 23:24:18.887: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 14 23:24:18.890: INFO: Selector matched 1 pods for map[app:agnhost] Jul 14 23:24:18.890: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jul 14 23:24:18.890: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config describe pod agnhost-primary-2ff2n --namespace=kubectl-2700' Jul 14 23:24:19.001: INFO: stderr: "" Jul 14 23:24:19.001: INFO: stdout: "Name: agnhost-primary-2ff2n\nNamespace: kubectl-2700\nPriority: 0\nNode: latest-worker/172.18.0.14\nStart Time: Tue, 14 Jul 2020 23:24:09 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.2.179\nIPs:\n IP: 10.244.2.179\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://5201ee7d7483f03b9749b6113c9751d5381a9ecb5d8a693f3752659c32c50ef1\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 14 Jul 2020 23:24:17 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-rgrrn (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-rgrrn:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-rgrrn\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 9s default-scheduler Successfully assigned kubectl-2700/agnhost-primary-2ff2n to latest-worker\n Normal Pulled 8s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 1s kubelet, latest-worker Created container agnhost-primary\n Normal Started 1s kubelet, 
latest-worker Started container agnhost-primary\n" Jul 14 23:24:19.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config describe rc agnhost-primary --namespace=kubectl-2700' Jul 14 23:24:19.120: INFO: stderr: "" Jul 14 23:24:19.121: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2700\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 10s replication-controller Created pod: agnhost-primary-2ff2n\n" Jul 14 23:24:19.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config describe service agnhost-primary --namespace=kubectl-2700' Jul 14 23:24:19.230: INFO: stderr: "" Jul 14 23:24:19.230: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2700\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.107.113.176\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.179:6379\nSession Affinity: None\nEvents: \n" Jul 14 23:24:19.231: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config describe node latest-control-plane' Jul 14 23:24:19.358: INFO: stderr: "" Jul 14 23:24:19.358: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: 
kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 10 Jul 2020 10:29:34 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Tue, 14 Jul 2020 23:24:13 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 14 Jul 2020 23:24:09 +0000 Fri, 10 Jul 2020 10:29:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 14 Jul 2020 23:24:09 +0000 Fri, 10 Jul 2020 10:29:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 14 Jul 2020 23:24:09 +0000 Fri, 10 Jul 2020 10:29:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 14 Jul 2020 23:24:09 +0000 Fri, 10 Jul 2020 10:30:09 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.12\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: 08e3d1af94e64c419f74d6afa70f0d43\n System UUID: b2b9a347-3d8a-409e-9c43-3d2f455385e1\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 20.04 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0-beta.1-34-g49b0743c\n Kubelet Version: v1.18.4\n Kube-Proxy Version: v1.18.4\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n 
--------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-lkg9r 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4d12h\n kube-system coredns-66bff467f8-xqch9 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4d12h\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d12h\n kube-system kindnet-6gzv5 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 4d12h\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 4d12h\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 4d12h\n kube-system kube-proxy-bvnbl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d12h\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 4d12h\n local-path-storage local-path-provisioner-67795f75bd-wdgcp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d12h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Jul 14 23:24:19.359: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config describe namespace kubectl-2700' Jul 14 23:24:19.451: INFO: stderr: "" Jul 14 23:24:19.451: INFO: stdout: "Name: kubectl-2700\nLabels: e2e-framework=kubectl\n e2e-run=aa1e7c69-c4e1-47bf-9144-010d97ffa572\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:24:19.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2700" for this suite. 
• [SLOW TEST:15.885 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1100 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":294,"completed":6,"skipped":198,"failed":0} [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:24:19.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-2651 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 14 23:24:19.547: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jul 14 23:24:19.701: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:24:22.542: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:24:23.708: INFO: The status 
of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:24:25.731: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:24:27.983: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:24:30.583: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:24:31.883: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:24:34.492: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:24:36.165: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:24:37.882: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:24:40.659: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:24:41.918: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:24:44.607: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:24:46.716: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:24:48.515: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:24:50.354: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:24:52.736: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:24:53.703: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:24:56.365: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:24:58.430: INFO: The 
status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:25:00.917: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:25:01.707: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:25:05.054: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:25:06.037: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 14 23:25:07.749: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 14 23:25:09.755: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 14 23:25:11.703: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 14 23:25:13.704: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 14 23:25:15.704: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 14 23:25:17.709: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 14 23:25:19.704: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 14 23:25:21.703: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 14 23:25:23.704: INFO: The status of Pod netserver-0 is Running (Ready = true) Jul 14 23:25:23.708: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jul 14 23:25:29.776: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.180:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2651 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 14 23:25:29.776: INFO: >>> kubeConfig: /root/.kube/config I0714 23:25:29.813238 7 log.go:181] (0xc000f4e370) (0xc002610c80) Create stream I0714 23:25:29.813263 7 log.go:181] (0xc000f4e370) (0xc002610c80) Stream added, broadcasting: 1 I0714 
23:25:29.816646 7 log.go:181] (0xc000f4e370) Reply frame received for 1 I0714 23:25:29.816672 7 log.go:181] (0xc000f4e370) (0xc002597540) Create stream I0714 23:25:29.816686 7 log.go:181] (0xc000f4e370) (0xc002597540) Stream added, broadcasting: 3 I0714 23:25:29.817583 7 log.go:181] (0xc000f4e370) Reply frame received for 3 I0714 23:25:29.817614 7 log.go:181] (0xc000f4e370) (0xc002a34820) Create stream I0714 23:25:29.817627 7 log.go:181] (0xc000f4e370) (0xc002a34820) Stream added, broadcasting: 5 I0714 23:25:29.818417 7 log.go:181] (0xc000f4e370) Reply frame received for 5 I0714 23:25:29.974148 7 log.go:181] (0xc000f4e370) Data frame received for 3 I0714 23:25:29.974190 7 log.go:181] (0xc002597540) (3) Data frame handling I0714 23:25:29.974264 7 log.go:181] (0xc002597540) (3) Data frame sent I0714 23:25:29.974532 7 log.go:181] (0xc000f4e370) Data frame received for 3 I0714 23:25:29.974573 7 log.go:181] (0xc000f4e370) Data frame received for 5 I0714 23:25:29.974647 7 log.go:181] (0xc002a34820) (5) Data frame handling I0714 23:25:29.974681 7 log.go:181] (0xc002597540) (3) Data frame handling I0714 23:25:29.976450 7 log.go:181] (0xc000f4e370) Data frame received for 1 I0714 23:25:29.976484 7 log.go:181] (0xc002610c80) (1) Data frame handling I0714 23:25:29.976571 7 log.go:181] (0xc002610c80) (1) Data frame sent I0714 23:25:29.976599 7 log.go:181] (0xc000f4e370) (0xc002610c80) Stream removed, broadcasting: 1 I0714 23:25:29.976625 7 log.go:181] (0xc000f4e370) Go away received I0714 23:25:29.977041 7 log.go:181] (0xc000f4e370) (0xc002610c80) Stream removed, broadcasting: 1 I0714 23:25:29.977067 7 log.go:181] (0xc000f4e370) (0xc002597540) Stream removed, broadcasting: 3 I0714 23:25:29.977080 7 log.go:181] (0xc000f4e370) (0xc002a34820) Stream removed, broadcasting: 5 Jul 14 23:25:29.977: INFO: Found all expected endpoints: [netserver-0] Jul 14 23:25:30.007: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 
http://10.244.1.24:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2651 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 14 23:25:30.007: INFO: >>> kubeConfig: /root/.kube/config I0714 23:25:30.039741 7 log.go:181] (0xc00278d550) (0xc00157a640) Create stream I0714 23:25:30.039761 7 log.go:181] (0xc00278d550) (0xc00157a640) Stream added, broadcasting: 1 I0714 23:25:30.042037 7 log.go:181] (0xc00278d550) Reply frame received for 1 I0714 23:25:30.042065 7 log.go:181] (0xc00278d550) (0xc00157a820) Create stream I0714 23:25:30.042075 7 log.go:181] (0xc00278d550) (0xc00157a820) Stream added, broadcasting: 3 I0714 23:25:30.042723 7 log.go:181] (0xc00278d550) Reply frame received for 3 I0714 23:25:30.042738 7 log.go:181] (0xc00278d550) (0xc00157a8c0) Create stream I0714 23:25:30.042748 7 log.go:181] (0xc00278d550) (0xc00157a8c0) Stream added, broadcasting: 5 I0714 23:25:30.043299 7 log.go:181] (0xc00278d550) Reply frame received for 5 I0714 23:25:30.095749 7 log.go:181] (0xc00278d550) Data frame received for 5 I0714 23:25:30.095774 7 log.go:181] (0xc00157a8c0) (5) Data frame handling I0714 23:25:30.095791 7 log.go:181] (0xc00278d550) Data frame received for 3 I0714 23:25:30.095799 7 log.go:181] (0xc00157a820) (3) Data frame handling I0714 23:25:30.095813 7 log.go:181] (0xc00157a820) (3) Data frame sent I0714 23:25:30.095864 7 log.go:181] (0xc00278d550) Data frame received for 3 I0714 23:25:30.095876 7 log.go:181] (0xc00157a820) (3) Data frame handling I0714 23:25:30.097391 7 log.go:181] (0xc00278d550) Data frame received for 1 I0714 23:25:30.097404 7 log.go:181] (0xc00157a640) (1) Data frame handling I0714 23:25:30.097416 7 log.go:181] (0xc00157a640) (1) Data frame sent I0714 23:25:30.097513 7 log.go:181] (0xc00278d550) (0xc00157a640) Stream removed, broadcasting: 1 I0714 23:25:30.097615 7 log.go:181] (0xc00278d550) (0xc00157a640) Stream removed, broadcasting: 1 I0714 
23:25:30.097644 7 log.go:181] (0xc00278d550) (0xc00157a820) Stream removed, broadcasting: 3 I0714 23:25:30.097668 7 log.go:181] (0xc00278d550) (0xc00157a8c0) Stream removed, broadcasting: 5 Jul 14 23:25:30.097: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:25:30.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0714 23:25:30.097753 7 log.go:181] (0xc00278d550) Go away received STEP: Destroying namespace "pod-network-test-2651" for this suite. • [SLOW TEST:70.646 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":7,"skipped":198,"failed":0} SSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:25:30.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] 
Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 14 23:25:30.210: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-212946b0-a6ca-462d-9960-49e3289eaaed" in namespace "security-context-test-5196" to be "Succeeded or Failed" Jul 14 23:25:30.222: INFO: Pod "busybox-privileged-false-212946b0-a6ca-462d-9960-49e3289eaaed": Phase="Pending", Reason="", readiness=false. Elapsed: 11.152521ms Jul 14 23:25:32.450: INFO: Pod "busybox-privileged-false-212946b0-a6ca-462d-9960-49e3289eaaed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239875327s Jul 14 23:25:34.480: INFO: Pod "busybox-privileged-false-212946b0-a6ca-462d-9960-49e3289eaaed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.269918865s Jul 14 23:25:34.480: INFO: Pod "busybox-privileged-false-212946b0-a6ca-462d-9960-49e3289eaaed" satisfied condition "Succeeded or Failed" Jul 14 23:25:34.496: INFO: Got logs for pod "busybox-privileged-false-212946b0-a6ca-462d-9960-49e3289eaaed": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:25:34.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5196" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":8,"skipped":202,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:25:34.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 14 23:25:35.950: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:25:43.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3515" for this suite. 
• [SLOW TEST:8.568 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":294,"completed":9,"skipped":203,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:25:43.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jul 14 23:25:43.200: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 14 23:25:43.214: INFO: Waiting for terminating namespaces to be deleted... 
Jul 14 23:25:43.216: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jul 14 23:25:43.220: INFO: rally-f6d9be4e-8wopf4nf-ftlff from c-rally-f6d9be4e-1cf55o07 started at 2020-07-14 23:25:33 +0000 UTC (1 container statuses recorded) Jul 14 23:25:43.220: INFO: Container rally-f6d9be4e-8wopf4nf ready: true, restart count 0 Jul 14 23:25:43.220: INFO: rally-f6d9be4e-8wopf4nf-mgknq from c-rally-f6d9be4e-1cf55o07 started at 2020-07-14 23:25:33 +0000 UTC (1 container statuses recorded) Jul 14 23:25:43.220: INFO: Container rally-f6d9be4e-8wopf4nf ready: true, restart count 0 Jul 14 23:25:43.220: INFO: kindnet-qt4jk from kube-system started at 2020-07-10 10:30:16 +0000 UTC (1 container statuses recorded) Jul 14 23:25:43.220: INFO: Container kindnet-cni ready: true, restart count 0 Jul 14 23:25:43.220: INFO: kube-proxy-xb9q4 from kube-system started at 2020-07-10 10:30:16 +0000 UTC (1 container statuses recorded) Jul 14 23:25:43.220: INFO: Container kube-proxy ready: true, restart count 0 Jul 14 23:25:43.220: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jul 14 23:25:43.224: INFO: rally-f6d9be4e-8wopf4nf-z7bz7 from c-rally-f6d9be4e-1cf55o07 started at 2020-07-14 23:25:42 +0000 UTC (1 container statuses recorded) Jul 14 23:25:43.224: INFO: Container rally-f6d9be4e-8wopf4nf ready: false, restart count 0 Jul 14 23:25:43.224: INFO: kindnet-gkkxx from kube-system started at 2020-07-10 10:30:17 +0000 UTC (1 container statuses recorded) Jul 14 23:25:43.224: INFO: Container kindnet-cni ready: true, restart count 0 Jul 14 23:25:43.224: INFO: kube-proxy-s596l from kube-system started at 2020-07-10 10:30:17 +0000 UTC (1 container statuses recorded) Jul 14 23:25:43.224: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-a67a6140-bc91-42b1-954d-3a1ee822ea3f 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-a67a6140-bc91-42b1-954d-3a1ee822ea3f off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-a67a6140-bc91-42b1-954d-3a1ee822ea3f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:26:03.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2587" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:20.368 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":294,"completed":10,"skipped":216,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:26:03.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 14 23:26:03.569: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jul 14 23:26:08.573: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 14 23:26:08.573: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up 
[AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jul 14 23:26:08.709: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5917 /apis/apps/v1/namespaces/deployment-5917/deployments/test-cleanup-deployment a52846be-c7ef-4b24-9184-81fe1a717a31 1208021 1 2020-07-14 23:26:08 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-07-14 23:26:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003591478 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jul 14 23:26:08.783: INFO: New ReplicaSet "test-cleanup-deployment-75b9cff456" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-75b9cff456 deployment-5917 /apis/apps/v1/namespaces/deployment-5917/replicasets/test-cleanup-deployment-75b9cff456 604905e6-1b00-4ea5-9f84-394795a7268f 1208030 1 2020-07-14 23:26:08 +0000 UTC map[name:cleanup-pod pod-template-hash:75b9cff456] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment a52846be-c7ef-4b24-9184-81fe1a717a31 0xc0026e06a7 0xc0026e06a8}] [] [{kube-controller-manager Update apps/v1 2020-07-14 23:26:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a52846be-c7ef-4b24-9184-81fe1a717a31\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 75b9cff456,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:75b9cff456] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0026e0738 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] 
map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 14 23:26:08.783: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jul 14 23:26:08.783: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-5917 /apis/apps/v1/namespaces/deployment-5917/replicasets/test-cleanup-controller 35e01e11-6e91-4860-a427-5f7498257102 1208023 1 2020-07-14 23:26:03 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment a52846be-c7ef-4b24-9184-81fe1a717a31 0xc0026e0597 0xc0026e0598}] [] [{e2e.test Update apps/v1 2020-07-14 23:26:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-07-14 23:26:08 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"a52846be-c7ef-4b24-9184-81fe1a717a31\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine 
[] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0026e0638 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jul 14 23:26:08.824: INFO: Pod "test-cleanup-controller-jhzw2" is available: &Pod{ObjectMeta:{test-cleanup-controller-jhzw2 test-cleanup-controller- deployment-5917 /api/v1/namespaces/deployment-5917/pods/test-cleanup-controller-jhzw2 40d2ab51-04ad-465c-a076-d40d5e2c9b41 1208010 0 2020-07-14 23:26:03 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 35e01e11-6e91-4860-a427-5f7498257102 0xc00359182f 0xc003591840}] [] [{kube-controller-manager Update v1 2020-07-14 23:26:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e01e11-6e91-4860-a427-5f7498257102\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-14 23:26:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.31\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m5smb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m5smb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m5smb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServic
eAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-14 23:26:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-14 23:26:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-14 23:26:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-14 23:26:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.1.31,StartTime:2020-07-14 23:26:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-14 23:26:07 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2a4ad1d550a7d8f519c9e22227612a3125b4951c7b5c89a9f8c24b6a6e67c954,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.31,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 14 23:26:08.825: INFO: Pod "test-cleanup-deployment-75b9cff456-kgh27" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-75b9cff456-kgh27 test-cleanup-deployment-75b9cff456- deployment-5917 /api/v1/namespaces/deployment-5917/pods/test-cleanup-deployment-75b9cff456-kgh27 69f1e1b5-0575-4f42-93bc-52ac53e4f68f 1208029 0 2020-07-14 23:26:08 +0000 UTC map[name:cleanup-pod pod-template-hash:75b9cff456] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-75b9cff456 604905e6-1b00-4ea5-9f84-394795a7268f 0xc0035919f7 0xc0035919f8}] [] [{kube-controller-manager Update v1 2020-07-14 23:26:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"604905e6-1b00-4ea5-9f84-394795a7268f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m5smb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m5smb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m5smb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil
,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-14 23:26:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:26:08.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5917" for this suite. • [SLOW TEST:5.466 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":294,"completed":11,"skipped":221,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:26:08.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 14 23:26:09.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jul 14 23:26:09.787: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-14T23:26:09Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 
fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-14T23:26:09Z]] name:name1 resourceVersion:1208066 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d8dac8db-cf09-435d-8886-4b90a5ad41af] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jul 14 23:26:20.317: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-14T23:26:19Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-14T23:26:19Z]] name:name2 resourceVersion:1208121 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:fb9e4c80-b660-439b-908e-e512362e5799] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jul 14 23:26:30.332: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-14T23:26:09Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-14T23:26:30Z]] name:name1 resourceVersion:1208175 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d8dac8db-cf09-435d-8886-4b90a5ad41af] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jul 14 23:26:40.337: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-14T23:26:19Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] 
f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-14T23:26:40Z]] name:name2 resourceVersion:1208207 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:fb9e4c80-b660-439b-908e-e512362e5799] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jul 14 23:26:50.345: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-14T23:26:09Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-14T23:26:30Z]] name:name1 resourceVersion:1208271 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d8dac8db-cf09-435d-8886-4b90a5ad41af] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jul 14 23:27:00.352: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-14T23:26:19Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-14T23:26:40Z]] name:name2 resourceVersion:1208298 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:fb9e4c80-b660-439b-908e-e512362e5799] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:27:12.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-4271" for this suite. 
• [SLOW TEST:63.789 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":294,"completed":12,"skipped":240,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:27:12.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-8521fcb4-92b8-45d6-a714-70e6e9c0fdfe STEP: Creating a pod to test consume secrets Jul 14 23:27:14.812: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0c5ca887-b0e8-42e8-bcbd-ca1286715d23" in namespace "projected-6418" to be "Succeeded or Failed" Jul 14 23:27:15.997: INFO: Pod "pod-projected-secrets-0c5ca887-b0e8-42e8-bcbd-ca1286715d23": Phase="Pending", 
Reason="", readiness=false. Elapsed: 1.185146822s Jul 14 23:27:18.506: INFO: Pod "pod-projected-secrets-0c5ca887-b0e8-42e8-bcbd-ca1286715d23": Phase="Pending", Reason="", readiness=false. Elapsed: 3.693590587s Jul 14 23:27:21.621: INFO: Pod "pod-projected-secrets-0c5ca887-b0e8-42e8-bcbd-ca1286715d23": Phase="Pending", Reason="", readiness=false. Elapsed: 6.809241613s Jul 14 23:27:23.624: INFO: Pod "pod-projected-secrets-0c5ca887-b0e8-42e8-bcbd-ca1286715d23": Phase="Pending", Reason="", readiness=false. Elapsed: 8.812004968s Jul 14 23:27:25.770: INFO: Pod "pod-projected-secrets-0c5ca887-b0e8-42e8-bcbd-ca1286715d23": Phase="Pending", Reason="", readiness=false. Elapsed: 10.957667114s Jul 14 23:27:27.775: INFO: Pod "pod-projected-secrets-0c5ca887-b0e8-42e8-bcbd-ca1286715d23": Phase="Running", Reason="", readiness=true. Elapsed: 12.963164429s Jul 14 23:27:29.778: INFO: Pod "pod-projected-secrets-0c5ca887-b0e8-42e8-bcbd-ca1286715d23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.966192481s STEP: Saw pod success Jul 14 23:27:29.778: INFO: Pod "pod-projected-secrets-0c5ca887-b0e8-42e8-bcbd-ca1286715d23" satisfied condition "Succeeded or Failed" Jul 14 23:27:29.781: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-0c5ca887-b0e8-42e8-bcbd-ca1286715d23 container secret-volume-test: STEP: delete the pod Jul 14 23:27:29.896: INFO: Waiting for pod pod-projected-secrets-0c5ca887-b0e8-42e8-bcbd-ca1286715d23 to disappear Jul 14 23:27:29.911: INFO: Pod pod-projected-secrets-0c5ca887-b0e8-42e8-bcbd-ca1286715d23 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:27:29.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6418" for this suite. 
• [SLOW TEST:17.150 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":294,"completed":13,"skipped":255,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:27:29.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:27:46.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8956" for this suite. 
• [SLOW TEST:16.125 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":294,"completed":14,"skipped":272,"failed":0} SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:27:46.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-dafdb0e4-9441-4a84-861b-01918bad07de in namespace container-probe-6091 Jul 14 23:27:58.145: INFO: Started pod test-webserver-dafdb0e4-9441-4a84-861b-01918bad07de in namespace container-probe-6091 STEP: checking the pod's current state and verifying that restartCount is present Jul 14 23:27:58.147: INFO: Initial restart count of pod test-webserver-dafdb0e4-9441-4a84-861b-01918bad07de is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing 
container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:31:58.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6091" for this suite. • [SLOW TEST:252.906 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":294,"completed":15,"skipped":274,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:31:58.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-8415/configmap-test-b5ecfe09-9cfd-4ab7-92b1-5dc3c51caf99 STEP: Creating a pod to test consume configMaps Jul 14 23:32:00.309: INFO: Waiting up to 5m0s for pod "pod-configmaps-961dc52a-3fa9-4bc7-af00-ebf7f3071e5e" in namespace "configmap-8415" to be "Succeeded or Failed" Jul 14 23:32:00.339: INFO: Pod 
"pod-configmaps-961dc52a-3fa9-4bc7-af00-ebf7f3071e5e": Phase="Pending", Reason="", readiness=false. Elapsed: 29.049184ms Jul 14 23:32:02.527: INFO: Pod "pod-configmaps-961dc52a-3fa9-4bc7-af00-ebf7f3071e5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21778842s Jul 14 23:32:04.564: INFO: Pod "pod-configmaps-961dc52a-3fa9-4bc7-af00-ebf7f3071e5e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.254265361s Jul 14 23:32:06.567: INFO: Pod "pod-configmaps-961dc52a-3fa9-4bc7-af00-ebf7f3071e5e": Phase="Running", Reason="", readiness=true. Elapsed: 6.257702853s Jul 14 23:32:08.781: INFO: Pod "pod-configmaps-961dc52a-3fa9-4bc7-af00-ebf7f3071e5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.47154009s STEP: Saw pod success Jul 14 23:32:08.781: INFO: Pod "pod-configmaps-961dc52a-3fa9-4bc7-af00-ebf7f3071e5e" satisfied condition "Succeeded or Failed" Jul 14 23:32:09.049: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-961dc52a-3fa9-4bc7-af00-ebf7f3071e5e container env-test: STEP: delete the pod Jul 14 23:32:09.225: INFO: Waiting for pod pod-configmaps-961dc52a-3fa9-4bc7-af00-ebf7f3071e5e to disappear Jul 14 23:32:09.250: INFO: Pod pod-configmaps-961dc52a-3fa9-4bc7-af00-ebf7f3071e5e no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:32:09.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8415" for this suite. 
• [SLOW TEST:10.307 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":294,"completed":16,"skipped":328,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:32:09.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0714 23:32:58.144686 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Jul 14 23:33:01.243: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Jul 14 23:33:01.243: INFO: Deleting pod "simpletest.rc-5hqvp" in namespace "gc-3689" Jul 14 23:33:01.881: INFO: Deleting pod "simpletest.rc-5kgh2" in namespace "gc-3689" Jul 14 23:33:02.816: INFO: Deleting pod "simpletest.rc-6xbq8" in namespace "gc-3689" Jul 14 23:33:06.532: INFO: Deleting pod "simpletest.rc-7zk8m" in namespace "gc-3689" Jul 14 23:33:09.990: INFO: Deleting pod "simpletest.rc-8vh8t" in namespace "gc-3689" Jul 14 23:33:11.421: INFO: Deleting pod "simpletest.rc-bkjdz" in namespace "gc-3689" Jul 14 23:33:12.660: INFO: Deleting pod "simpletest.rc-f2zbz" in namespace "gc-3689" Jul 14 23:33:14.784: INFO: Deleting pod "simpletest.rc-hsn5l" in namespace "gc-3689" Jul 14 23:33:15.231: INFO: Deleting pod "simpletest.rc-j4ddx" in namespace "gc-3689" Jul 14 23:33:15.837: INFO: Deleting pod "simpletest.rc-nq4vm" in namespace "gc-3689" [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:33:16.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "gc-3689" for this suite. • [SLOW TEST:67.672 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":294,"completed":17,"skipped":347,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:33:16.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0714 23:33:28.040305 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Jul 14 23:33:30.084: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:33:30.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5711" for this suite. 
• [SLOW TEST:13.161 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":294,"completed":18,"skipped":388,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:33:30.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 14 23:33:30.217: INFO: Waiting up to 5m0s for pod "downwardapi-volume-445990c7-ca1f-43eb-b2ed-cf60a635b32e" in namespace "projected-1854" to be "Succeeded or Failed" Jul 14 23:33:30.220: INFO: Pod "downwardapi-volume-445990c7-ca1f-43eb-b2ed-cf60a635b32e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.614181ms Jul 14 23:33:32.223: INFO: Pod "downwardapi-volume-445990c7-ca1f-43eb-b2ed-cf60a635b32e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005943933s Jul 14 23:33:34.227: INFO: Pod "downwardapi-volume-445990c7-ca1f-43eb-b2ed-cf60a635b32e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009583176s Jul 14 23:33:36.230: INFO: Pod "downwardapi-volume-445990c7-ca1f-43eb-b2ed-cf60a635b32e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012835583s STEP: Saw pod success Jul 14 23:33:36.230: INFO: Pod "downwardapi-volume-445990c7-ca1f-43eb-b2ed-cf60a635b32e" satisfied condition "Succeeded or Failed" Jul 14 23:33:36.232: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-445990c7-ca1f-43eb-b2ed-cf60a635b32e container client-container: STEP: delete the pod Jul 14 23:33:36.263: INFO: Waiting for pod downwardapi-volume-445990c7-ca1f-43eb-b2ed-cf60a635b32e to disappear Jul 14 23:33:36.306: INFO: Pod downwardapi-volume-445990c7-ca1f-43eb-b2ed-cf60a635b32e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:33:36.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1854" for this suite. 
• [SLOW TEST:6.221 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":294,"completed":19,"skipped":407,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:33:36.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-hrbq STEP: Creating a pod to test atomic-volume-subpath Jul 14 23:33:36.433: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-hrbq" in namespace "subpath-8527" to be "Succeeded or Failed" Jul 14 23:33:36.441: INFO: Pod "pod-subpath-test-projected-hrbq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.626933ms Jul 14 23:33:38.445: INFO: Pod "pod-subpath-test-projected-hrbq": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012686897s Jul 14 23:33:40.493: INFO: Pod "pod-subpath-test-projected-hrbq": Phase="Running", Reason="", readiness=true. Elapsed: 4.060679821s Jul 14 23:33:42.497: INFO: Pod "pod-subpath-test-projected-hrbq": Phase="Running", Reason="", readiness=true. Elapsed: 6.0642615s Jul 14 23:33:44.500: INFO: Pod "pod-subpath-test-projected-hrbq": Phase="Running", Reason="", readiness=true. Elapsed: 8.067450294s Jul 14 23:33:46.503: INFO: Pod "pod-subpath-test-projected-hrbq": Phase="Running", Reason="", readiness=true. Elapsed: 10.070500779s Jul 14 23:33:48.506: INFO: Pod "pod-subpath-test-projected-hrbq": Phase="Running", Reason="", readiness=true. Elapsed: 12.073041294s Jul 14 23:33:50.904: INFO: Pod "pod-subpath-test-projected-hrbq": Phase="Running", Reason="", readiness=true. Elapsed: 14.471382753s Jul 14 23:33:52.907: INFO: Pod "pod-subpath-test-projected-hrbq": Phase="Running", Reason="", readiness=true. Elapsed: 16.474797153s Jul 14 23:33:54.911: INFO: Pod "pod-subpath-test-projected-hrbq": Phase="Running", Reason="", readiness=true. Elapsed: 18.478173717s Jul 14 23:33:56.914: INFO: Pod "pod-subpath-test-projected-hrbq": Phase="Running", Reason="", readiness=true. Elapsed: 20.48103606s Jul 14 23:33:58.917: INFO: Pod "pod-subpath-test-projected-hrbq": Phase="Running", Reason="", readiness=true. Elapsed: 22.484150074s Jul 14 23:34:00.920: INFO: Pod "pod-subpath-test-projected-hrbq": Phase="Running", Reason="", readiness=true. Elapsed: 24.486975184s Jul 14 23:34:02.923: INFO: Pod "pod-subpath-test-projected-hrbq": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.490493958s STEP: Saw pod success Jul 14 23:34:02.923: INFO: Pod "pod-subpath-test-projected-hrbq" satisfied condition "Succeeded or Failed" Jul 14 23:34:02.926: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-hrbq container test-container-subpath-projected-hrbq: STEP: delete the pod Jul 14 23:34:04.279: INFO: Waiting for pod pod-subpath-test-projected-hrbq to disappear Jul 14 23:34:04.284: INFO: Pod pod-subpath-test-projected-hrbq no longer exists STEP: Deleting pod pod-subpath-test-projected-hrbq Jul 14 23:34:04.284: INFO: Deleting pod "pod-subpath-test-projected-hrbq" in namespace "subpath-8527" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:34:04.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8527" for this suite. • [SLOW TEST:28.436 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":294,"completed":20,"skipped":425,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:34:04.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 14 23:35:03.067: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:35:03.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9505" for this suite. 
• [SLOW TEST:58.778 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":294,"completed":21,"skipped":431,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:35:03.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jul 14 23:35:12.357: INFO: Successfully updated pod "adopt-release-4wtf9" STEP: Checking that the Job readopts the Pod Jul 14 23:35:12.358: INFO: Waiting up to 15m0s for pod 
"adopt-release-4wtf9" in namespace "job-9581" to be "adopted" Jul 14 23:35:12.432: INFO: Pod "adopt-release-4wtf9": Phase="Running", Reason="", readiness=true. Elapsed: 74.314282ms Jul 14 23:35:14.436: INFO: Pod "adopt-release-4wtf9": Phase="Running", Reason="", readiness=true. Elapsed: 2.078084408s Jul 14 23:35:14.436: INFO: Pod "adopt-release-4wtf9" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jul 14 23:35:14.944: INFO: Successfully updated pod "adopt-release-4wtf9" STEP: Checking that the Job releases the Pod Jul 14 23:35:14.944: INFO: Waiting up to 15m0s for pod "adopt-release-4wtf9" in namespace "job-9581" to be "released" Jul 14 23:35:14.986: INFO: Pod "adopt-release-4wtf9": Phase="Running", Reason="", readiness=true. Elapsed: 41.967439ms Jul 14 23:35:17.373: INFO: Pod "adopt-release-4wtf9": Phase="Running", Reason="", readiness=true. Elapsed: 2.428835933s Jul 14 23:35:17.373: INFO: Pod "adopt-release-4wtf9" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:35:17.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9581" for this suite. 
• [SLOW TEST:13.956 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":294,"completed":22,"skipped":448,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:35:17.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2898.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2898.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 14 23:35:23.913: INFO: DNS probes using dns-2898/dns-test-0b111662-2e74-4ec6-9883-2ffcfc562c4c succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:35:23.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2898" for this suite. 
• [SLOW TEST:6.512 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":294,"completed":23,"skipped":471,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:35:23.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Jul 14 23:35:24.189: INFO: >>> kubeConfig: /root/.kube/config Jul 14 23:35:28.114: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:35:38.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9220" for this suite. 
• [SLOW TEST:14.965 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":294,"completed":24,"skipped":504,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:35:38.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:37:13.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3487" for this suite. 
• [SLOW TEST:94.823 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":294,"completed":25,"skipped":593,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:37:13.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jul 14 23:37:13.936: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7430 /api/v1/namespaces/watch-7430/configmaps/e2e-watch-test-watch-closed aed09077-e5bf-4808-911a-e4388e1048f8 1210799 0 2020-07-14 23:37:13 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-07-14 23:37:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jul 14 23:37:13.937: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7430 /api/v1/namespaces/watch-7430/configmaps/e2e-watch-test-watch-closed aed09077-e5bf-4808-911a-e4388e1048f8 1210800 0 2020-07-14 23:37:13 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-07-14 23:37:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jul 14 23:37:14.007: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7430 /api/v1/namespaces/watch-7430/configmaps/e2e-watch-test-watch-closed aed09077-e5bf-4808-911a-e4388e1048f8 1210801 0 2020-07-14 23:37:13 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-07-14 23:37:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 14 23:37:14.007: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7430 /api/v1/namespaces/watch-7430/configmaps/e2e-watch-test-watch-closed aed09077-e5bf-4808-911a-e4388e1048f8 1210802 0 2020-07-14 23:37:13 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-07-14 23:37:13 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:37:14.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7430" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":294,"completed":26,"skipped":604,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:37:14.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jul 14 23:37:28.285: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5635 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 14 23:37:28.285: INFO: >>> kubeConfig: 
/root/.kube/config I0714 23:37:28.315076 7 log.go:181] (0xc005058840) (0xc0004a0640) Create stream I0714 23:37:28.315112 7 log.go:181] (0xc005058840) (0xc0004a0640) Stream added, broadcasting: 1 I0714 23:37:28.317119 7 log.go:181] (0xc005058840) Reply frame received for 1 I0714 23:37:28.317144 7 log.go:181] (0xc005058840) (0xc001270000) Create stream I0714 23:37:28.317152 7 log.go:181] (0xc005058840) (0xc001270000) Stream added, broadcasting: 3 I0714 23:37:28.317882 7 log.go:181] (0xc005058840) Reply frame received for 3 I0714 23:37:28.317913 7 log.go:181] (0xc005058840) (0xc001724140) Create stream I0714 23:37:28.317928 7 log.go:181] (0xc005058840) (0xc001724140) Stream added, broadcasting: 5 I0714 23:37:28.318549 7 log.go:181] (0xc005058840) Reply frame received for 5 I0714 23:37:28.387380 7 log.go:181] (0xc005058840) Data frame received for 3 I0714 23:37:28.387403 7 log.go:181] (0xc001270000) (3) Data frame handling I0714 23:37:28.387426 7 log.go:181] (0xc001270000) (3) Data frame sent I0714 23:37:28.387599 7 log.go:181] (0xc005058840) Data frame received for 3 I0714 23:37:28.387645 7 log.go:181] (0xc001270000) (3) Data frame handling I0714 23:37:28.387746 7 log.go:181] (0xc005058840) Data frame received for 5 I0714 23:37:28.387772 7 log.go:181] (0xc001724140) (5) Data frame handling I0714 23:37:28.390125 7 log.go:181] (0xc005058840) Data frame received for 1 I0714 23:37:28.390156 7 log.go:181] (0xc0004a0640) (1) Data frame handling I0714 23:37:28.390181 7 log.go:181] (0xc0004a0640) (1) Data frame sent I0714 23:37:28.390207 7 log.go:181] (0xc005058840) (0xc0004a0640) Stream removed, broadcasting: 1 I0714 23:37:28.390247 7 log.go:181] (0xc005058840) Go away received I0714 23:37:28.390405 7 log.go:181] (0xc005058840) (0xc0004a0640) Stream removed, broadcasting: 1 I0714 23:37:28.390438 7 log.go:181] (0xc005058840) (0xc001270000) Stream removed, broadcasting: 3 I0714 23:37:28.390471 7 log.go:181] (0xc005058840) (0xc001724140) Stream removed, broadcasting: 5 Jul 14 
23:37:28.390: INFO: Exec stderr: "" Jul 14 23:37:28.390: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5635 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 14 23:37:28.390: INFO: >>> kubeConfig: /root/.kube/config I0714 23:37:28.421232 7 log.go:181] (0xc005058e70) (0xc0004a10e0) Create stream I0714 23:37:28.421263 7 log.go:181] (0xc005058e70) (0xc0004a10e0) Stream added, broadcasting: 1 I0714 23:37:28.424384 7 log.go:181] (0xc005058e70) Reply frame received for 1 I0714 23:37:28.424409 7 log.go:181] (0xc005058e70) (0xc001270140) Create stream I0714 23:37:28.424421 7 log.go:181] (0xc005058e70) (0xc001270140) Stream added, broadcasting: 3 I0714 23:37:28.425365 7 log.go:181] (0xc005058e70) Reply frame received for 3 I0714 23:37:28.425405 7 log.go:181] (0xc005058e70) (0xc001ce2c80) Create stream I0714 23:37:28.425417 7 log.go:181] (0xc005058e70) (0xc001ce2c80) Stream added, broadcasting: 5 I0714 23:37:28.426360 7 log.go:181] (0xc005058e70) Reply frame received for 5 I0714 23:37:28.483166 7 log.go:181] (0xc005058e70) Data frame received for 5 I0714 23:37:28.483197 7 log.go:181] (0xc001ce2c80) (5) Data frame handling I0714 23:37:28.483227 7 log.go:181] (0xc005058e70) Data frame received for 3 I0714 23:37:28.483240 7 log.go:181] (0xc001270140) (3) Data frame handling I0714 23:37:28.483252 7 log.go:181] (0xc001270140) (3) Data frame sent I0714 23:37:28.483281 7 log.go:181] (0xc005058e70) Data frame received for 3 I0714 23:37:28.483294 7 log.go:181] (0xc001270140) (3) Data frame handling I0714 23:37:28.484546 7 log.go:181] (0xc005058e70) Data frame received for 1 I0714 23:37:28.484575 7 log.go:181] (0xc0004a10e0) (1) Data frame handling I0714 23:37:28.484600 7 log.go:181] (0xc0004a10e0) (1) Data frame sent I0714 23:37:28.484609 7 log.go:181] (0xc005058e70) (0xc0004a10e0) Stream removed, broadcasting: 1 I0714 23:37:28.484676 7 log.go:181] (0xc005058e70) Go away 
received I0714 23:37:28.484704 7 log.go:181] (0xc005058e70) (0xc0004a10e0) Stream removed, broadcasting: 1 I0714 23:37:28.484720 7 log.go:181] (0xc005058e70) (0xc001270140) Stream removed, broadcasting: 3 I0714 23:37:28.484790 7 log.go:181] (0xc005058e70) (0xc001ce2c80) Stream removed, broadcasting: 5 Jul 14 23:37:28.484: INFO: Exec stderr: "" Jul 14 23:37:28.484: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5635 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 14 23:37:28.484: INFO: >>> kubeConfig: /root/.kube/config I0714 23:37:28.510922 7 log.go:181] (0xc002612370) (0xc001270dc0) Create stream I0714 23:37:28.510951 7 log.go:181] (0xc002612370) (0xc001270dc0) Stream added, broadcasting: 1 I0714 23:37:28.514325 7 log.go:181] (0xc002612370) Reply frame received for 1 I0714 23:37:28.514356 7 log.go:181] (0xc002612370) (0xc0017241e0) Create stream I0714 23:37:28.514368 7 log.go:181] (0xc002612370) (0xc0017241e0) Stream added, broadcasting: 3 I0714 23:37:28.515197 7 log.go:181] (0xc002612370) Reply frame received for 3 I0714 23:37:28.515238 7 log.go:181] (0xc002612370) (0xc001ce2dc0) Create stream I0714 23:37:28.515255 7 log.go:181] (0xc002612370) (0xc001ce2dc0) Stream added, broadcasting: 5 I0714 23:37:28.516168 7 log.go:181] (0xc002612370) Reply frame received for 5 I0714 23:37:28.568782 7 log.go:181] (0xc002612370) Data frame received for 3 I0714 23:37:28.568844 7 log.go:181] (0xc0017241e0) (3) Data frame handling I0714 23:37:28.568863 7 log.go:181] (0xc0017241e0) (3) Data frame sent I0714 23:37:28.568959 7 log.go:181] (0xc002612370) Data frame received for 5 I0714 23:37:28.568976 7 log.go:181] (0xc001ce2dc0) (5) Data frame handling I0714 23:37:28.569002 7 log.go:181] (0xc002612370) Data frame received for 3 I0714 23:37:28.569022 7 log.go:181] (0xc0017241e0) (3) Data frame handling I0714 23:37:28.569698 7 log.go:181] (0xc002612370) Data frame received for 1 I0714 
23:37:28.569714 7 log.go:181] (0xc001270dc0) (1) Data frame handling I0714 23:37:28.569725 7 log.go:181] (0xc001270dc0) (1) Data frame sent I0714 23:37:28.569787 7 log.go:181] (0xc002612370) (0xc001270dc0) Stream removed, broadcasting: 1 I0714 23:37:28.569860 7 log.go:181] (0xc002612370) (0xc001270dc0) Stream removed, broadcasting: 1 I0714 23:37:28.569871 7 log.go:181] (0xc002612370) (0xc0017241e0) Stream removed, broadcasting: 3 I0714 23:37:28.569971 7 log.go:181] (0xc002612370) (0xc001ce2dc0) Stream removed, broadcasting: 5 Jul 14 23:37:28.570: INFO: Exec stderr: "" Jul 14 23:37:28.570: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5635 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 14 23:37:28.570: INFO: >>> kubeConfig: /root/.kube/config I0714 23:37:28.570260 7 log.go:181] (0xc002612370) Go away received I0714 23:37:28.616295 7 log.go:181] (0xc001dac580) (0xc001ce3220) Create stream I0714 23:37:28.616318 7 log.go:181] (0xc001dac580) (0xc001ce3220) Stream added, broadcasting: 1 I0714 23:37:28.618657 7 log.go:181] (0xc001dac580) Reply frame received for 1 I0714 23:37:28.618686 7 log.go:181] (0xc001dac580) (0xc001ce32c0) Create stream I0714 23:37:28.618695 7 log.go:181] (0xc001dac580) (0xc001ce32c0) Stream added, broadcasting: 3 I0714 23:37:28.619296 7 log.go:181] (0xc001dac580) Reply frame received for 3 I0714 23:37:28.619335 7 log.go:181] (0xc001dac580) (0xc0015fe5a0) Create stream I0714 23:37:28.619343 7 log.go:181] (0xc001dac580) (0xc0015fe5a0) Stream added, broadcasting: 5 I0714 23:37:28.620007 7 log.go:181] (0xc001dac580) Reply frame received for 5 I0714 23:37:28.679230 7 log.go:181] (0xc001dac580) Data frame received for 5 I0714 23:37:28.679258 7 log.go:181] (0xc0015fe5a0) (5) Data frame handling I0714 23:37:28.679274 7 log.go:181] (0xc001dac580) Data frame received for 3 I0714 23:37:28.679284 7 log.go:181] (0xc001ce32c0) (3) Data frame handling 
I0714 23:37:28.679292 7 log.go:181] (0xc001ce32c0) (3) Data frame sent I0714 23:37:28.679301 7 log.go:181] (0xc001dac580) Data frame received for 3 I0714 23:37:28.679311 7 log.go:181] (0xc001ce32c0) (3) Data frame handling I0714 23:37:28.680231 7 log.go:181] (0xc001dac580) Data frame received for 1 I0714 23:37:28.680261 7 log.go:181] (0xc001ce3220) (1) Data frame handling I0714 23:37:28.680285 7 log.go:181] (0xc001ce3220) (1) Data frame sent I0714 23:37:28.680302 7 log.go:181] (0xc001dac580) (0xc001ce3220) Stream removed, broadcasting: 1 I0714 23:37:28.680321 7 log.go:181] (0xc001dac580) Go away received I0714 23:37:28.680404 7 log.go:181] (0xc001dac580) (0xc001ce3220) Stream removed, broadcasting: 1 I0714 23:37:28.680422 7 log.go:181] (0xc001dac580) (0xc001ce32c0) Stream removed, broadcasting: 3 I0714 23:37:28.680431 7 log.go:181] (0xc001dac580) (0xc0015fe5a0) Stream removed, broadcasting: 5 Jul 14 23:37:28.680: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jul 14 23:37:28.680: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5635 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 14 23:37:28.680: INFO: >>> kubeConfig: /root/.kube/config I0714 23:37:28.707340 7 log.go:181] (0xc0026126e0) (0xc001271180) Create stream I0714 23:37:28.707360 7 log.go:181] (0xc0026126e0) (0xc001271180) Stream added, broadcasting: 1 I0714 23:37:28.709078 7 log.go:181] (0xc0026126e0) Reply frame received for 1 I0714 23:37:28.709116 7 log.go:181] (0xc0026126e0) (0xc0015fe640) Create stream I0714 23:37:28.709129 7 log.go:181] (0xc0026126e0) (0xc0015fe640) Stream added, broadcasting: 3 I0714 23:37:28.709772 7 log.go:181] (0xc0026126e0) Reply frame received for 3 I0714 23:37:28.709800 7 log.go:181] (0xc0026126e0) (0xc0004a12c0) Create stream I0714 23:37:28.709812 7 log.go:181] (0xc0026126e0) (0xc0004a12c0) Stream 
added, broadcasting: 5 I0714 23:37:28.710499 7 log.go:181] (0xc0026126e0) Reply frame received for 5 I0714 23:37:28.753759 7 log.go:181] (0xc0026126e0) Data frame received for 5 I0714 23:37:28.753779 7 log.go:181] (0xc0004a12c0) (5) Data frame handling I0714 23:37:28.753799 7 log.go:181] (0xc0026126e0) Data frame received for 3 I0714 23:37:28.753807 7 log.go:181] (0xc0015fe640) (3) Data frame handling I0714 23:37:28.753819 7 log.go:181] (0xc0015fe640) (3) Data frame sent I0714 23:37:28.753880 7 log.go:181] (0xc0026126e0) Data frame received for 3 I0714 23:37:28.753899 7 log.go:181] (0xc0015fe640) (3) Data frame handling I0714 23:37:28.755431 7 log.go:181] (0xc0026126e0) Data frame received for 1 I0714 23:37:28.755449 7 log.go:181] (0xc001271180) (1) Data frame handling I0714 23:37:28.755460 7 log.go:181] (0xc001271180) (1) Data frame sent I0714 23:37:28.755471 7 log.go:181] (0xc0026126e0) (0xc001271180) Stream removed, broadcasting: 1 I0714 23:37:28.755537 7 log.go:181] (0xc0026126e0) (0xc001271180) Stream removed, broadcasting: 1 I0714 23:37:28.755567 7 log.go:181] (0xc0026126e0) (0xc0015fe640) Stream removed, broadcasting: 3 I0714 23:37:28.755649 7 log.go:181] (0xc0026126e0) Go away received I0714 23:37:28.755687 7 log.go:181] (0xc0026126e0) (0xc0004a12c0) Stream removed, broadcasting: 5 Jul 14 23:37:28.755: INFO: Exec stderr: "" Jul 14 23:37:28.755: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5635 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 14 23:37:28.755: INFO: >>> kubeConfig: /root/.kube/config I0714 23:37:28.777473 7 log.go:181] (0xc001b60370) (0xc001724960) Create stream I0714 23:37:28.777499 7 log.go:181] (0xc001b60370) (0xc001724960) Stream added, broadcasting: 1 I0714 23:37:28.780437 7 log.go:181] (0xc001b60370) Reply frame received for 1 I0714 23:37:28.780490 7 log.go:181] (0xc001b60370) (0xc0015fe820) Create stream I0714 23:37:28.780514 
7 log.go:181] (0xc001b60370) (0xc0015fe820) Stream added, broadcasting: 3 I0714 23:37:28.781569 7 log.go:181] (0xc001b60370) Reply frame received for 3 I0714 23:37:28.781643 7 log.go:181] (0xc001b60370) (0xc0015fe960) Create stream I0714 23:37:28.781684 7 log.go:181] (0xc001b60370) (0xc0015fe960) Stream added, broadcasting: 5 I0714 23:37:28.782439 7 log.go:181] (0xc001b60370) Reply frame received for 5 I0714 23:37:28.848046 7 log.go:181] (0xc001b60370) Data frame received for 3 I0714 23:37:28.848066 7 log.go:181] (0xc0015fe820) (3) Data frame handling I0714 23:37:28.848075 7 log.go:181] (0xc0015fe820) (3) Data frame sent I0714 23:37:28.848082 7 log.go:181] (0xc001b60370) Data frame received for 3 I0714 23:37:28.848089 7 log.go:181] (0xc0015fe820) (3) Data frame handling I0714 23:37:28.848108 7 log.go:181] (0xc001b60370) Data frame received for 5 I0714 23:37:28.848114 7 log.go:181] (0xc0015fe960) (5) Data frame handling I0714 23:37:28.849981 7 log.go:181] (0xc001b60370) Data frame received for 1 I0714 23:37:28.850003 7 log.go:181] (0xc001724960) (1) Data frame handling I0714 23:37:28.850020 7 log.go:181] (0xc001724960) (1) Data frame sent I0714 23:37:28.850030 7 log.go:181] (0xc001b60370) (0xc001724960) Stream removed, broadcasting: 1 I0714 23:37:28.850041 7 log.go:181] (0xc001b60370) Go away received I0714 23:37:28.850120 7 log.go:181] (0xc001b60370) (0xc001724960) Stream removed, broadcasting: 1 I0714 23:37:28.850129 7 log.go:181] (0xc001b60370) (0xc0015fe820) Stream removed, broadcasting: 3 I0714 23:37:28.850134 7 log.go:181] (0xc001b60370) (0xc0015fe960) Stream removed, broadcasting: 5 Jul 14 23:37:28.850: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jul 14 23:37:28.850: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5635 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 14 
23:37:28.850: INFO: >>> kubeConfig: /root/.kube/config I0714 23:37:28.877970 7 log.go:181] (0xc0013856b0) (0xc0015fedc0) Create stream I0714 23:37:28.878002 7 log.go:181] (0xc0013856b0) (0xc0015fedc0) Stream added, broadcasting: 1 I0714 23:37:28.880479 7 log.go:181] (0xc0013856b0) Reply frame received for 1 I0714 23:37:28.880527 7 log.go:181] (0xc0013856b0) (0xc0015fef00) Create stream I0714 23:37:28.880558 7 log.go:181] (0xc0013856b0) (0xc0015fef00) Stream added, broadcasting: 3 I0714 23:37:28.881417 7 log.go:181] (0xc0013856b0) Reply frame received for 3 I0714 23:37:28.881440 7 log.go:181] (0xc0013856b0) (0xc001724a00) Create stream I0714 23:37:28.881453 7 log.go:181] (0xc0013856b0) (0xc001724a00) Stream added, broadcasting: 5 I0714 23:37:28.882295 7 log.go:181] (0xc0013856b0) Reply frame received for 5 I0714 23:37:28.937998 7 log.go:181] (0xc0013856b0) Data frame received for 3 I0714 23:37:28.938020 7 log.go:181] (0xc0015fef00) (3) Data frame handling I0714 23:37:28.938048 7 log.go:181] (0xc0015fef00) (3) Data frame sent I0714 23:37:28.938060 7 log.go:181] (0xc0013856b0) Data frame received for 3 I0714 23:37:28.938066 7 log.go:181] (0xc0015fef00) (3) Data frame handling I0714 23:37:28.938181 7 log.go:181] (0xc0013856b0) Data frame received for 5 I0714 23:37:28.938203 7 log.go:181] (0xc001724a00) (5) Data frame handling I0714 23:37:28.939450 7 log.go:181] (0xc0013856b0) Data frame received for 1 I0714 23:37:28.939494 7 log.go:181] (0xc0015fedc0) (1) Data frame handling I0714 23:37:28.939522 7 log.go:181] (0xc0015fedc0) (1) Data frame sent I0714 23:37:28.939545 7 log.go:181] (0xc0013856b0) (0xc0015fedc0) Stream removed, broadcasting: 1 I0714 23:37:28.939618 7 log.go:181] (0xc0013856b0) Go away received I0714 23:37:28.939767 7 log.go:181] (0xc0013856b0) (0xc0015fedc0) Stream removed, broadcasting: 1 I0714 23:37:28.939787 7 log.go:181] (0xc0013856b0) (0xc0015fef00) Stream removed, broadcasting: 3 I0714 23:37:28.939797 7 log.go:181] (0xc0013856b0) (0xc001724a00) 
Stream removed, broadcasting: 5 Jul 14 23:37:28.939: INFO: Exec stderr: "" Jul 14 23:37:28.939: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5635 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 14 23:37:28.939: INFO: >>> kubeConfig: /root/.kube/config I0714 23:37:28.975931 7 log.go:181] (0xc005059550) (0xc002524320) Create stream I0714 23:37:28.975949 7 log.go:181] (0xc005059550) (0xc002524320) Stream added, broadcasting: 1 I0714 23:37:28.977844 7 log.go:181] (0xc005059550) Reply frame received for 1 I0714 23:37:28.977885 7 log.go:181] (0xc005059550) (0xc001724aa0) Create stream I0714 23:37:28.977899 7 log.go:181] (0xc005059550) (0xc001724aa0) Stream added, broadcasting: 3 I0714 23:37:28.978696 7 log.go:181] (0xc005059550) Reply frame received for 3 I0714 23:37:28.978722 7 log.go:181] (0xc005059550) (0xc0025243c0) Create stream I0714 23:37:28.978736 7 log.go:181] (0xc005059550) (0xc0025243c0) Stream added, broadcasting: 5 I0714 23:37:28.979369 7 log.go:181] (0xc005059550) Reply frame received for 5 I0714 23:37:29.038441 7 log.go:181] (0xc005059550) Data frame received for 5 I0714 23:37:29.038476 7 log.go:181] (0xc0025243c0) (5) Data frame handling I0714 23:37:29.038498 7 log.go:181] (0xc005059550) Data frame received for 3 I0714 23:37:29.038510 7 log.go:181] (0xc001724aa0) (3) Data frame handling I0714 23:37:29.038551 7 log.go:181] (0xc001724aa0) (3) Data frame sent I0714 23:37:29.038567 7 log.go:181] (0xc005059550) Data frame received for 3 I0714 23:37:29.038581 7 log.go:181] (0xc001724aa0) (3) Data frame handling I0714 23:37:29.039619 7 log.go:181] (0xc005059550) Data frame received for 1 I0714 23:37:29.039628 7 log.go:181] (0xc002524320) (1) Data frame handling I0714 23:37:29.039634 7 log.go:181] (0xc002524320) (1) Data frame sent I0714 23:37:29.039702 7 log.go:181] (0xc005059550) (0xc002524320) Stream removed, broadcasting: 1 I0714 
23:37:29.039712 7 log.go:181] (0xc005059550) Go away received I0714 23:37:29.039815 7 log.go:181] (0xc005059550) (0xc002524320) Stream removed, broadcasting: 1 I0714 23:37:29.039855 7 log.go:181] (0xc005059550) (0xc001724aa0) Stream removed, broadcasting: 3 I0714 23:37:29.039901 7 log.go:181] (0xc005059550) (0xc0025243c0) Stream removed, broadcasting: 5 Jul 14 23:37:29.039: INFO: Exec stderr: "" Jul 14 23:37:29.039: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5635 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 14 23:37:29.040: INFO: >>> kubeConfig: /root/.kube/config I0714 23:37:29.072680 7 log.go:181] (0xc001b609a0) (0xc0017250e0) Create stream I0714 23:37:29.072711 7 log.go:181] (0xc001b609a0) (0xc0017250e0) Stream added, broadcasting: 1 I0714 23:37:29.074505 7 log.go:181] (0xc001b609a0) Reply frame received for 1 I0714 23:37:29.074530 7 log.go:181] (0xc001b609a0) (0xc001ce34a0) Create stream I0714 23:37:29.074538 7 log.go:181] (0xc001b609a0) (0xc001ce34a0) Stream added, broadcasting: 3 I0714 23:37:29.075180 7 log.go:181] (0xc001b609a0) Reply frame received for 3 I0714 23:37:29.075233 7 log.go:181] (0xc001b609a0) (0xc0015ff040) Create stream I0714 23:37:29.075247 7 log.go:181] (0xc001b609a0) (0xc0015ff040) Stream added, broadcasting: 5 I0714 23:37:29.075907 7 log.go:181] (0xc001b609a0) Reply frame received for 5 I0714 23:37:29.124462 7 log.go:181] (0xc001b609a0) Data frame received for 5 I0714 23:37:29.124512 7 log.go:181] (0xc0015ff040) (5) Data frame handling I0714 23:37:29.124542 7 log.go:181] (0xc001b609a0) Data frame received for 3 I0714 23:37:29.124556 7 log.go:181] (0xc001ce34a0) (3) Data frame handling I0714 23:37:29.124578 7 log.go:181] (0xc001ce34a0) (3) Data frame sent I0714 23:37:29.124593 7 log.go:181] (0xc001b609a0) Data frame received for 3 I0714 23:37:29.124605 7 log.go:181] (0xc001ce34a0) (3) Data frame handling I0714 
23:37:29.125080 7 log.go:181] (0xc001b609a0) Data frame received for 1 I0714 23:37:29.125093 7 log.go:181] (0xc0017250e0) (1) Data frame handling I0714 23:37:29.125099 7 log.go:181] (0xc0017250e0) (1) Data frame sent I0714 23:37:29.125230 7 log.go:181] (0xc001b609a0) (0xc0017250e0) Stream removed, broadcasting: 1 I0714 23:37:29.125262 7 log.go:181] (0xc001b609a0) Go away received I0714 23:37:29.125333 7 log.go:181] (0xc001b609a0) (0xc0017250e0) Stream removed, broadcasting: 1 I0714 23:37:29.125356 7 log.go:181] (0xc001b609a0) (0xc001ce34a0) Stream removed, broadcasting: 3 I0714 23:37:29.125367 7 log.go:181] (0xc001b609a0) (0xc0015ff040) Stream removed, broadcasting: 5 Jul 14 23:37:29.125: INFO: Exec stderr: "" Jul 14 23:37:29.125: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5635 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 14 23:37:29.125: INFO: >>> kubeConfig: /root/.kube/config I0714 23:37:29.190833 7 log.go:181] (0xc005059b80) (0xc002524820) Create stream I0714 23:37:29.190861 7 log.go:181] (0xc005059b80) (0xc002524820) Stream added, broadcasting: 1 I0714 23:37:29.193884 7 log.go:181] (0xc005059b80) Reply frame received for 1 I0714 23:37:29.193925 7 log.go:181] (0xc005059b80) (0xc001271220) Create stream I0714 23:37:29.193942 7 log.go:181] (0xc005059b80) (0xc001271220) Stream added, broadcasting: 3 I0714 23:37:29.194572 7 log.go:181] (0xc005059b80) Reply frame received for 3 I0714 23:37:29.194622 7 log.go:181] (0xc005059b80) (0xc0015ff0e0) Create stream I0714 23:37:29.194642 7 log.go:181] (0xc005059b80) (0xc0015ff0e0) Stream added, broadcasting: 5 I0714 23:37:29.195351 7 log.go:181] (0xc005059b80) Reply frame received for 5 I0714 23:37:29.244623 7 log.go:181] (0xc005059b80) Data frame received for 3 I0714 23:37:29.244652 7 log.go:181] (0xc001271220) (3) Data frame handling I0714 23:37:29.244699 7 log.go:181] (0xc001271220) (3) Data 
frame sent I0714 23:37:29.244715 7 log.go:181] (0xc005059b80) Data frame received for 3 I0714 23:37:29.244774 7 log.go:181] (0xc001271220) (3) Data frame handling I0714 23:37:29.244790 7 log.go:181] (0xc005059b80) Data frame received for 5 I0714 23:37:29.244798 7 log.go:181] (0xc0015ff0e0) (5) Data frame handling I0714 23:37:29.245974 7 log.go:181] (0xc005059b80) Data frame received for 1 I0714 23:37:29.245993 7 log.go:181] (0xc002524820) (1) Data frame handling I0714 23:37:29.246002 7 log.go:181] (0xc002524820) (1) Data frame sent I0714 23:37:29.246104 7 log.go:181] (0xc005059b80) (0xc002524820) Stream removed, broadcasting: 1 I0714 23:37:29.246158 7 log.go:181] (0xc005059b80) (0xc002524820) Stream removed, broadcasting: 1 I0714 23:37:29.246166 7 log.go:181] (0xc005059b80) (0xc001271220) Stream removed, broadcasting: 3 I0714 23:37:29.246238 7 log.go:181] (0xc005059b80) (0xc0015ff0e0) Stream removed, broadcasting: 5 I0714 23:37:29.246301 7 log.go:181] (0xc005059b80) Go away received Jul 14 23:37:29.246: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:37:29.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-5635" for this suite. 
• [SLOW TEST:15.239 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":27,"skipped":618,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 14 23:37:29.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jul 14 23:37:29.969: INFO: Pod name wrapped-volume-race-4b5ffe2f-7489-4b92-ba17-a89401610285: Found 0 pods out of 5
Jul 14 23:37:34.977: INFO: Pod name wrapped-volume-race-4b5ffe2f-7489-4b92-ba17-a89401610285: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4b5ffe2f-7489-4b92-ba17-a89401610285 in namespace emptydir-wrapper-1866, will wait for the garbage collector to delete the pods
Jul 14 23:37:51.066: INFO: Deleting ReplicationController
wrapped-volume-race-4b5ffe2f-7489-4b92-ba17-a89401610285 took: 8.310408ms Jul 14 23:37:51.466: INFO: Terminating ReplicationController wrapped-volume-race-4b5ffe2f-7489-4b92-ba17-a89401610285 pods took: 400.240938ms STEP: Creating RC which spawns configmap-volume pods Jul 14 23:38:09.432: INFO: Pod name wrapped-volume-race-5aa00736-cb1c-4ddd-b9b6-bb303895a98d: Found 0 pods out of 5 Jul 14 23:38:14.447: INFO: Pod name wrapped-volume-race-5aa00736-cb1c-4ddd-b9b6-bb303895a98d: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-5aa00736-cb1c-4ddd-b9b6-bb303895a98d in namespace emptydir-wrapper-1866, will wait for the garbage collector to delete the pods Jul 14 23:38:30.574: INFO: Deleting ReplicationController wrapped-volume-race-5aa00736-cb1c-4ddd-b9b6-bb303895a98d took: 52.197344ms Jul 14 23:38:31.075: INFO: Terminating ReplicationController wrapped-volume-race-5aa00736-cb1c-4ddd-b9b6-bb303895a98d pods took: 501.240094ms STEP: Creating RC which spawns configmap-volume pods Jul 14 23:38:39.367: INFO: Pod name wrapped-volume-race-eb735f5b-d75c-4ed1-a4db-c10e2394f638: Found 0 pods out of 5 Jul 14 23:38:44.376: INFO: Pod name wrapped-volume-race-eb735f5b-d75c-4ed1-a4db-c10e2394f638: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-eb735f5b-d75c-4ed1-a4db-c10e2394f638 in namespace emptydir-wrapper-1866, will wait for the garbage collector to delete the pods Jul 14 23:39:00.523: INFO: Deleting ReplicationController wrapped-volume-race-eb735f5b-d75c-4ed1-a4db-c10e2394f638 took: 18.860796ms Jul 14 23:39:00.924: INFO: Terminating ReplicationController wrapped-volume-race-eb735f5b-d75c-4ed1-a4db-c10e2394f638 pods took: 400.263868ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:39:09.952: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1866" for this suite.
• [SLOW TEST:100.697 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":294,"completed":28,"skipped":622,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 14 23:39:09.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Jul 14 23:39:10.019: INFO: PodSpec: initContainers in spec.initContainers
Jul 14 23:40:06.974: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b5d2ef06-a7a5-4e2b-a101-590212ab799f",
GenerateName:"", Namespace:"init-container-8151", SelfLink:"/api/v1/namespaces/init-container-8151/pods/pod-init-b5d2ef06-a7a5-4e2b-a101-590212ab799f", UID:"95ec4cbb-3f23-423f-a363-c4809c151b43", ResourceVersion:"1212447", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63730366750, loc:(*time.Location)(0x7deddc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"19920919"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003b7c1a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003b7c1c0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003b7c1e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003b7c200)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-kd6n2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc003c66100), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-kd6n2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-kd6n2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), 
Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-kd6n2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000d81348), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000d521c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", 
Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000d813d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000d813f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000d813f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000d813fc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730366750, loc:(*time.Location)(0x7deddc0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730366750, loc:(*time.Location)(0x7deddc0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730366750, loc:(*time.Location)(0x7deddc0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730366750, loc:(*time.Location)(0x7deddc0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.14", PodIP:"10.244.2.208", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.208"}}, StartTime:(*v1.Time)(0xc003b7c220), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000d52310)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000d523f0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://28b3602affb2aa5ee17637d52a5baecbfe38756fe261a016cf203e39344442ef", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003b7c260), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003b7c240), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc000d815ff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:40:06.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8151" for this suite. • [SLOW TEST:57.059 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":294,"completed":29,"skipped":650,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:40:07.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 14 23:40:07.333: INFO: Waiting up to 5m0s for pod "pod-46e2b89c-dfb8-45bb-b1e4-a032f5acc6e2" in namespace 
"emptydir-5757" to be "Succeeded or Failed" Jul 14 23:40:07.391: INFO: Pod "pod-46e2b89c-dfb8-45bb-b1e4-a032f5acc6e2": Phase="Pending", Reason="", readiness=false. Elapsed: 58.608324ms Jul 14 23:40:09.444: INFO: Pod "pod-46e2b89c-dfb8-45bb-b1e4-a032f5acc6e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111628203s Jul 14 23:40:11.468: INFO: Pod "pod-46e2b89c-dfb8-45bb-b1e4-a032f5acc6e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134994163s Jul 14 23:40:13.471: INFO: Pod "pod-46e2b89c-dfb8-45bb-b1e4-a032f5acc6e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.137890725s STEP: Saw pod success Jul 14 23:40:13.471: INFO: Pod "pod-46e2b89c-dfb8-45bb-b1e4-a032f5acc6e2" satisfied condition "Succeeded or Failed" Jul 14 23:40:13.473: INFO: Trying to get logs from node latest-worker2 pod pod-46e2b89c-dfb8-45bb-b1e4-a032f5acc6e2 container test-container: STEP: delete the pod Jul 14 23:40:13.508: INFO: Waiting for pod pod-46e2b89c-dfb8-45bb-b1e4-a032f5acc6e2 to disappear Jul 14 23:40:13.542: INFO: Pod pod-46e2b89c-dfb8-45bb-b1e4-a032f5acc6e2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:40:13.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5757" for this suite. 
• [SLOW TEST:6.529 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":30,"skipped":651,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 14 23:40:13.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 14 23:40:13.635: INFO: Waiting up to 5m0s for pod "pod-0eaa9296-6c5c-47ce-b898-91995451da64" in namespace "emptydir-5122" to be "Succeeded or Failed"
Jul 14 23:40:13.681: INFO: Pod "pod-0eaa9296-6c5c-47ce-b898-91995451da64": Phase="Pending", Reason="", readiness=false. Elapsed: 45.287512ms
Jul 14 23:40:16.076: INFO: Pod "pod-0eaa9296-6c5c-47ce-b898-91995451da64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.440509329s
Jul 14 23:40:18.101: INFO: Pod "pod-0eaa9296-6c5c-47ce-b898-91995451da64": Phase="Running", Reason="", readiness=true.
Elapsed: 4.465413095s Jul 14 23:40:20.118: INFO: Pod "pod-0eaa9296-6c5c-47ce-b898-91995451da64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.482310441s STEP: Saw pod success Jul 14 23:40:20.118: INFO: Pod "pod-0eaa9296-6c5c-47ce-b898-91995451da64" satisfied condition "Succeeded or Failed" Jul 14 23:40:20.149: INFO: Trying to get logs from node latest-worker2 pod pod-0eaa9296-6c5c-47ce-b898-91995451da64 container test-container: STEP: delete the pod Jul 14 23:40:20.302: INFO: Waiting for pod pod-0eaa9296-6c5c-47ce-b898-91995451da64 to disappear Jul 14 23:40:20.305: INFO: Pod pod-0eaa9296-6c5c-47ce-b898-91995451da64 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:40:20.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5122" for this suite. • [SLOW TEST:6.761 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":31,"skipped":656,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:40:20.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default 
service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4353.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4353.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4353.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4353.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4353.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4353.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4353.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4353.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4353.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4353.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 14 23:40:29.500: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:29.503: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:29.506: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:29.508: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:29.517: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:29.519: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from 
pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:29.522: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:29.525: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:29.531: INFO: Lookups using dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4353.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4353.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local jessie_udp@dns-test-service-2.dns-4353.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4353.svc.cluster.local] Jul 14 23:40:34.536: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:34.540: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:34.543: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4353.svc.cluster.local from 
pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:34.547: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:34.557: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:34.561: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:34.564: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:34.567: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:34.575: INFO: Lookups using dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4353.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4353.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local jessie_udp@dns-test-service-2.dns-4353.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4353.svc.cluster.local] Jul 14 23:40:39.536: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:39.540: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:39.543: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:39.546: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:39.554: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:39.557: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:39.559: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4353.svc.cluster.local from pod 
dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:39.562: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:39.567: INFO: Lookups using dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4353.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4353.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local jessie_udp@dns-test-service-2.dns-4353.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4353.svc.cluster.local] Jul 14 23:40:44.535: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:44.538: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:44.541: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:44.567: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4353.svc.cluster.local from pod 
dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:44.575: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:44.578: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:44.581: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:44.584: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:44.589: INFO: Lookups using dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4353.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4353.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local jessie_udp@dns-test-service-2.dns-4353.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4353.svc.cluster.local] Jul 14 23:40:49.536: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:49.540: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:49.543: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:49.546: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:49.594: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:49.597: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:49.600: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:49.602: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:49.608: INFO: Lookups using dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4353.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4353.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local jessie_udp@dns-test-service-2.dns-4353.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4353.svc.cluster.local] Jul 14 23:40:54.646: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:54.650: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:54.654: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:54.657: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:54.666: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:54.670: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:54.673: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:54.677: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4353.svc.cluster.local from pod dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61: the server could not find the requested resource (get pods dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61) Jul 14 23:40:54.683: INFO: Lookups using dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4353.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4353.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4353.svc.cluster.local jessie_udp@dns-test-service-2.dns-4353.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4353.svc.cluster.local] Jul 14 23:40:59.574: INFO: DNS probes using dns-4353/dns-test-42610e3d-4bfa-4034-82b4-24336b8ffc61 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 
23:40:59.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4353" for this suite. • [SLOW TEST:39.788 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":294,"completed":32,"skipped":676,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:41:00.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:41:11.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-349" for this suite. • [SLOW TEST:11.182 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":294,"completed":33,"skipped":685,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:41:11.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jul 14 23:41:11.392: INFO: Waiting up to 5m0s for pod "downward-api-4ac31bf4-c4e5-485d-a234-36ef9b9018ce" in namespace "downward-api-7920" to be "Succeeded or Failed" Jul 14 23:41:11.395: INFO: Pod "downward-api-4ac31bf4-c4e5-485d-a234-36ef9b9018ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.741455ms Jul 14 23:41:13.546: INFO: Pod "downward-api-4ac31bf4-c4e5-485d-a234-36ef9b9018ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153818325s Jul 14 23:41:15.561: INFO: Pod "downward-api-4ac31bf4-c4e5-485d-a234-36ef9b9018ce": Phase="Running", Reason="", readiness=true. Elapsed: 4.168614461s Jul 14 23:41:17.565: INFO: Pod "downward-api-4ac31bf4-c4e5-485d-a234-36ef9b9018ce": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.172843766s STEP: Saw pod success Jul 14 23:41:17.565: INFO: Pod "downward-api-4ac31bf4-c4e5-485d-a234-36ef9b9018ce" satisfied condition "Succeeded or Failed" Jul 14 23:41:17.569: INFO: Trying to get logs from node latest-worker2 pod downward-api-4ac31bf4-c4e5-485d-a234-36ef9b9018ce container dapi-container: STEP: delete the pod Jul 14 23:41:17.609: INFO: Waiting for pod downward-api-4ac31bf4-c4e5-485d-a234-36ef9b9018ce to disappear Jul 14 23:41:17.620: INFO: Pod downward-api-4ac31bf4-c4e5-485d-a234-36ef9b9018ce no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:41:17.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7920" for this suite. • [SLOW TEST:6.345 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":294,"completed":34,"skipped":723,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:41:17.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in 
volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-2a8c0fc0-5017-4752-aab2-4b5587fca7ce STEP: Creating a pod to test consume configMaps Jul 14 23:41:17.791: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-958f826d-9bda-401c-b0bf-2f9d9449b432" in namespace "projected-3937" to be "Succeeded or Failed" Jul 14 23:41:17.794: INFO: Pod "pod-projected-configmaps-958f826d-9bda-401c-b0bf-2f9d9449b432": Phase="Pending", Reason="", readiness=false. Elapsed: 2.407603ms Jul 14 23:41:19.798: INFO: Pod "pod-projected-configmaps-958f826d-9bda-401c-b0bf-2f9d9449b432": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006504732s Jul 14 23:41:21.802: INFO: Pod "pod-projected-configmaps-958f826d-9bda-401c-b0bf-2f9d9449b432": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010798701s Jul 14 23:41:23.806: INFO: Pod "pod-projected-configmaps-958f826d-9bda-401c-b0bf-2f9d9449b432": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014837203s STEP: Saw pod success Jul 14 23:41:23.806: INFO: Pod "pod-projected-configmaps-958f826d-9bda-401c-b0bf-2f9d9449b432" satisfied condition "Succeeded or Failed" Jul 14 23:41:23.809: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-958f826d-9bda-401c-b0bf-2f9d9449b432 container projected-configmap-volume-test: STEP: delete the pod Jul 14 23:41:23.897: INFO: Waiting for pod pod-projected-configmaps-958f826d-9bda-401c-b0bf-2f9d9449b432 to disappear Jul 14 23:41:23.907: INFO: Pod pod-projected-configmaps-958f826d-9bda-401c-b0bf-2f9d9449b432 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:41:23.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3937" for this suite. • [SLOW TEST:6.289 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":294,"completed":35,"skipped":728,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:41:23.918: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 14 23:41:24.675: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 14 23:41:27.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730366884, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730366884, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730366884, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730366884, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 14 23:41:29.439: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730366884, loc:(*time.Location)(0x7deddc0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730366884, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730366884, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730366884, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 14 23:41:32.469: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jul 14 23:41:36.534: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config attach --namespace=webhook-2777 to-be-attached-pod -i -c=container1' Jul 14 23:41:39.462: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:41:39.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2777" for this suite. STEP: Destroying namespace "webhook-2777-markers" for this suite. 
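The webhook registered above denies the `pods/attach` subresource, which is why the `kubectl attach` invocation exits with rc: 1. As a minimal sketch, this is roughly the AdmissionReview denial such a webhook would return (field names follow the `admission.k8s.io/v1` response shape; the message text is hypothetical, not taken from the test):

```python
def deny_attach_review(uid: str) -> dict:
    """Build an admission.k8s.io/v1 AdmissionReview response that
    rejects the request. `uid` must echo the uid from the incoming
    AdmissionReview request so the API server can correlate them."""
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,
            "allowed": False,  # denial: the attach request is refused
            "status": {"message": "attaching to this pod is not allowed"},
        },
    }
```

A denial like this is what surfaces to the client as the non-zero exit code logged above.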
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.670 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":294,"completed":36,"skipped":750,"failed":0} [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:41:39.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jul 14 23:41:39.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod 
--namespace=kubectl-3168' Jul 14 23:41:39.788: INFO: stderr: "" Jul 14 23:41:39.788: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jul 14 23:41:44.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3168 -o json' Jul 14 23:41:44.944: INFO: stderr: "" Jul 14 23:41:44.944: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-07-14T23:41:39Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-07-14T23:41:39Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": 
{},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.68\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-07-14T23:41:44Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-3168\",\n \"resourceVersion\": \"1213244\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3168/pods/e2e-test-httpd-pod\",\n \"uid\": \"b71c7b20-4b99-482f-81d0-71a452bc7f1d\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-m8tkx\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-m8tkx\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-m8tkx\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-14T23:41:39Z\",\n 
\"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-14T23:41:44Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-14T23:41:44Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-14T23:41:39Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://6904a90242dc61b4610e5dbfd4ddfdcd5c19d0b0ff0c61ac71484a2bb3592ed5\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-07-14T23:41:44Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.11\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.68\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.68\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-07-14T23:41:39Z\"\n }\n}\n" STEP: replace the image in the pod Jul 14 23:41:44.944: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3168' Jul 14 23:41:45.269: INFO: stderr: "" Jul 14 23:41:45.269: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1581 Jul 14 23:41:45.612: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3168' Jul 14 23:41:59.178: INFO: stderr: "" Jul 14 23:41:59.178: INFO: 
stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:41:59.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3168" for this suite. • [SLOW TEST:19.597 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1572 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":294,"completed":37,"skipped":750,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:41:59.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition 
STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:42:34.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2978" for this suite. 
• [SLOW TEST:35.407 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":294,"completed":38,"skipped":761,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:42:34.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 14 23:42:34.730: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f0022b4-9f93-48db-beb2-b8a77cd48d71" in namespace "projected-2026" to be "Succeeded or Failed" Jul 14 
23:42:34.733: INFO: Pod "downwardapi-volume-7f0022b4-9f93-48db-beb2-b8a77cd48d71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.367067ms Jul 14 23:42:36.832: INFO: Pod "downwardapi-volume-7f0022b4-9f93-48db-beb2-b8a77cd48d71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101779275s Jul 14 23:42:38.842: INFO: Pod "downwardapi-volume-7f0022b4-9f93-48db-beb2-b8a77cd48d71": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111530175s Jul 14 23:42:40.845: INFO: Pod "downwardapi-volume-7f0022b4-9f93-48db-beb2-b8a77cd48d71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.115188709s STEP: Saw pod success Jul 14 23:42:40.846: INFO: Pod "downwardapi-volume-7f0022b4-9f93-48db-beb2-b8a77cd48d71" satisfied condition "Succeeded or Failed" Jul 14 23:42:40.848: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-7f0022b4-9f93-48db-beb2-b8a77cd48d71 container client-container: STEP: delete the pod Jul 14 23:42:40.883: INFO: Waiting for pod downwardapi-volume-7f0022b4-9f93-48db-beb2-b8a77cd48d71 to disappear Jul 14 23:42:41.030: INFO: Pod downwardapi-volume-7f0022b4-9f93-48db-beb2-b8a77cd48d71 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:42:41.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2026" for this suite. 
• [SLOW TEST:6.598 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":294,"completed":39,"skipped":778,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:42:41.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-06ab9f9b-0f90-4b96-9f9d-7809068c7380 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:42:47.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4754" for this suite. 
• [SLOW TEST:6.651 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":40,"skipped":792,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:42:47.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 14 23:42:47.953: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3871963d-9298-4895-ab43-116bf2cee35c" in namespace "projected-9094" to be "Succeeded or Failed" Jul 14 23:42:47.970: INFO: Pod "downwardapi-volume-3871963d-9298-4895-ab43-116bf2cee35c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.145557ms Jul 14 23:42:49.994: INFO: Pod "downwardapi-volume-3871963d-9298-4895-ab43-116bf2cee35c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041097975s Jul 14 23:42:51.998: INFO: Pod "downwardapi-volume-3871963d-9298-4895-ab43-116bf2cee35c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045230659s STEP: Saw pod success Jul 14 23:42:51.998: INFO: Pod "downwardapi-volume-3871963d-9298-4895-ab43-116bf2cee35c" satisfied condition "Succeeded or Failed" Jul 14 23:42:52.000: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-3871963d-9298-4895-ab43-116bf2cee35c container client-container: STEP: delete the pod Jul 14 23:42:52.026: INFO: Waiting for pod downwardapi-volume-3871963d-9298-4895-ab43-116bf2cee35c to disappear Jul 14 23:42:52.030: INFO: Pod downwardapi-volume-3871963d-9298-4895-ab43-116bf2cee35c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:42:52.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9094" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":294,"completed":41,"skipped":809,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:42:52.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0714 23:42:53.522992 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Jul 14 23:42:55.559: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:42:55.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9794" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":294,"completed":42,"skipped":837,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:42:55.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 14 23:42:55.958: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jul 14 23:42:55.983: INFO: Number of nodes with available pods: 0 Jul 14 23:42:55.983: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jul 14 23:42:56.141: INFO: Number of nodes with available pods: 0 Jul 14 23:42:56.141: INFO: Node latest-worker2 is running more than one daemon pod Jul 14 23:42:57.145: INFO: Number of nodes with available pods: 0 Jul 14 23:42:57.145: INFO: Node latest-worker2 is running more than one daemon pod Jul 14 23:42:58.145: INFO: Number of nodes with available pods: 0 Jul 14 23:42:58.145: INFO: Node latest-worker2 is running more than one daemon pod Jul 14 23:42:59.143: INFO: Number of nodes with available pods: 0 Jul 14 23:42:59.143: INFO: Node latest-worker2 is running more than one daemon pod Jul 14 23:43:00.145: INFO: Number of nodes with available pods: 1 Jul 14 23:43:00.145: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jul 14 23:43:00.182: INFO: Number of nodes with available pods: 1 Jul 14 23:43:00.182: INFO: Number of running nodes: 0, number of available pods: 1 Jul 14 23:43:01.194: INFO: Number of nodes with available pods: 0 Jul 14 23:43:01.194: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jul 14 23:43:01.259: INFO: Number of nodes with available pods: 0 Jul 14 23:43:01.259: INFO: Node latest-worker2 is running more than one daemon pod Jul 14 23:43:02.300: INFO: Number of nodes with available pods: 0 Jul 14 23:43:02.300: INFO: Node latest-worker2 is running more than one daemon pod Jul 14 23:43:03.264: INFO: Number of nodes with available pods: 0 Jul 14 23:43:03.264: INFO: Node latest-worker2 is running more than one daemon pod Jul 14 23:43:04.265: INFO: Number of nodes with available pods: 0 Jul 14 23:43:04.265: INFO: Node latest-worker2 is running more than one daemon pod Jul 14 23:43:05.264: INFO: Number of nodes with available pods: 0 Jul 14 23:43:05.264: INFO: Node latest-worker2 is running more than one daemon pod Jul 14 23:43:06.265: INFO: Number of 
nodes with available pods: 0 Jul 14 23:43:06.265: INFO: Node latest-worker2 is running more than one daemon pod Jul 14 23:43:07.264: INFO: Number of nodes with available pods: 0 Jul 14 23:43:07.264: INFO: Node latest-worker2 is running more than one daemon pod Jul 14 23:43:08.264: INFO: Number of nodes with available pods: 0 Jul 14 23:43:08.264: INFO: Node latest-worker2 is running more than one daemon pod Jul 14 23:43:09.267: INFO: Number of nodes with available pods: 0 Jul 14 23:43:09.267: INFO: Node latest-worker2 is running more than one daemon pod Jul 14 23:43:10.264: INFO: Number of nodes with available pods: 0 Jul 14 23:43:10.264: INFO: Node latest-worker2 is running more than one daemon pod Jul 14 23:43:11.443: INFO: Number of nodes with available pods: 0 Jul 14 23:43:11.444: INFO: Node latest-worker2 is running more than one daemon pod Jul 14 23:43:12.366: INFO: Number of nodes with available pods: 0 Jul 14 23:43:12.366: INFO: Node latest-worker2 is running more than one daemon pod Jul 14 23:43:13.263: INFO: Number of nodes with available pods: 1 Jul 14 23:43:13.263: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7136, will wait for the garbage collector to delete the pods Jul 14 23:43:13.331: INFO: Deleting DaemonSet.extensions daemon-set took: 8.10599ms Jul 14 23:43:13.631: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.232225ms Jul 14 23:43:18.035: INFO: Number of nodes with available pods: 0 Jul 14 23:43:18.035: INFO: Number of running nodes: 0, number of available pods: 0 Jul 14 23:43:18.041: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7136/daemonsets","resourceVersion":"1213979"},"items":null} Jul 
14 23:43:18.044: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7136/pods","resourceVersion":"1213979"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:43:18.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7136" for this suite. • [SLOW TEST:22.516 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":294,"completed":43,"skipped":855,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:43:18.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Jul 14 23:43:18.466: INFO: Waiting up to 5m0s for pod "pod-5a0e3df5-b0b9-493b-84a9-8d7400591d1d" in namespace "emptydir-9998" to be "Succeeded or Failed" Jul 14 23:43:18.468: INFO: Pod 
"pod-5a0e3df5-b0b9-493b-84a9-8d7400591d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.257165ms Jul 14 23:43:20.472: INFO: Pod "pod-5a0e3df5-b0b9-493b-84a9-8d7400591d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006740287s Jul 14 23:43:22.477: INFO: Pod "pod-5a0e3df5-b0b9-493b-84a9-8d7400591d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011551505s Jul 14 23:43:24.481: INFO: Pod "pod-5a0e3df5-b0b9-493b-84a9-8d7400591d1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014885363s STEP: Saw pod success Jul 14 23:43:24.481: INFO: Pod "pod-5a0e3df5-b0b9-493b-84a9-8d7400591d1d" satisfied condition "Succeeded or Failed" Jul 14 23:43:24.484: INFO: Trying to get logs from node latest-worker2 pod pod-5a0e3df5-b0b9-493b-84a9-8d7400591d1d container test-container: STEP: delete the pod Jul 14 23:43:24.516: INFO: Waiting for pod pod-5a0e3df5-b0b9-493b-84a9-8d7400591d1d to disappear Jul 14 23:43:24.529: INFO: Pod pod-5a0e3df5-b0b9-493b-84a9-8d7400591d1d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:43:24.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9998" for this suite. • [SLOW TEST:6.452 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":44,"skipped":863,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:43:24.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:43:40.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-925" for this suite. • [SLOW TEST:16.300 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":294,"completed":45,"skipped":868,"failed":0} SS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:43:40.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jul 14 23:43:41.068: INFO: Waiting up to 5m0s for pod "downward-api-60b7ab0d-2c1a-4a22-bdc5-8ade22df8c11" in namespace "downward-api-9801" to be "Succeeded or Failed" Jul 14 23:43:41.111: INFO: Pod "downward-api-60b7ab0d-2c1a-4a22-bdc5-8ade22df8c11": Phase="Pending", Reason="", readiness=false. Elapsed: 43.465231ms Jul 14 23:43:43.120: INFO: Pod "downward-api-60b7ab0d-2c1a-4a22-bdc5-8ade22df8c11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052687366s Jul 14 23:43:45.125: INFO: Pod "downward-api-60b7ab0d-2c1a-4a22-bdc5-8ade22df8c11": Phase="Running", Reason="", readiness=true. Elapsed: 4.056902288s Jul 14 23:43:47.128: INFO: Pod "downward-api-60b7ab0d-2c1a-4a22-bdc5-8ade22df8c11": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.060626788s STEP: Saw pod success Jul 14 23:43:47.128: INFO: Pod "downward-api-60b7ab0d-2c1a-4a22-bdc5-8ade22df8c11" satisfied condition "Succeeded or Failed" Jul 14 23:43:47.131: INFO: Trying to get logs from node latest-worker pod downward-api-60b7ab0d-2c1a-4a22-bdc5-8ade22df8c11 container dapi-container: STEP: delete the pod Jul 14 23:43:47.254: INFO: Waiting for pod downward-api-60b7ab0d-2c1a-4a22-bdc5-8ade22df8c11 to disappear Jul 14 23:43:47.276: INFO: Pod downward-api-60b7ab0d-2c1a-4a22-bdc5-8ade22df8c11 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:43:47.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9801" for this suite. • [SLOW TEST:6.450 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":294,"completed":46,"skipped":870,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:43:47.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 14 23:43:48.598: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 14 23:43:50.605: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367028, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367028, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367028, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367028, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 14 23:43:53.666: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the 
AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:43:53.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4367" for this suite. STEP: Destroying namespace "webhook-4367-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.779 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":294,"completed":47,"skipped":870,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:43:54.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1417.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1417.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1417.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1417.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1417.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1417.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 14 23:44:00.868: INFO: DNS probes using dns-1417/dns-test-f6049852-5b2b-48b7-9036-c77f8ae43ed0 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:44:00.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1417" for this suite. • [SLOW TEST:6.961 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":294,"completed":48,"skipped":879,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:44:01.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: 
Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-87f0e3d8-82ae-42fe-8a8c-48649866b50b STEP: Creating a pod to test consume secrets Jul 14 23:44:01.412: INFO: Waiting up to 5m0s for pod "pod-secrets-a4a8c199-2d6c-4017-a8c6-40263139dd4f" in namespace "secrets-9903" to be "Succeeded or Failed" Jul 14 23:44:01.529: INFO: Pod "pod-secrets-a4a8c199-2d6c-4017-a8c6-40263139dd4f": Phase="Pending", Reason="", readiness=false. Elapsed: 116.957808ms Jul 14 23:44:03.551: INFO: Pod "pod-secrets-a4a8c199-2d6c-4017-a8c6-40263139dd4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139336127s Jul 14 23:44:05.556: INFO: Pod "pod-secrets-a4a8c199-2d6c-4017-a8c6-40263139dd4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143794763s Jul 14 23:44:07.561: INFO: Pod "pod-secrets-a4a8c199-2d6c-4017-a8c6-40263139dd4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.149328563s STEP: Saw pod success Jul 14 23:44:07.562: INFO: Pod "pod-secrets-a4a8c199-2d6c-4017-a8c6-40263139dd4f" satisfied condition "Succeeded or Failed" Jul 14 23:44:07.565: INFO: Trying to get logs from node latest-worker pod pod-secrets-a4a8c199-2d6c-4017-a8c6-40263139dd4f container secret-volume-test: STEP: delete the pod Jul 14 23:44:07.597: INFO: Waiting for pod pod-secrets-a4a8c199-2d6c-4017-a8c6-40263139dd4f to disappear Jul 14 23:44:07.603: INFO: Pod pod-secrets-a4a8c199-2d6c-4017-a8c6-40263139dd4f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:44:07.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9903" for this suite. 
• [SLOW TEST:6.584 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":49,"skipped":887,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:44:07.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Jul 14 23:44:12.277: INFO: Successfully updated pod "labelsupdate642f2e92-bf4f-4e3c-9e9a-df85d3532f8d" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:44:14.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7013" for this suite. 
• [SLOW TEST:6.690 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":294,"completed":50,"skipped":898,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:44:14.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:44:14.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5647" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":294,"completed":51,"skipped":910,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:44:14.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3806 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Jul 14 23:44:14.604: INFO: Found 0 stateful pods, waiting for 3 Jul 14 23:44:24.649: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 14 23:44:24.649: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 14 23:44:24.649: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jul 14 23:44:34.610: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 14 23:44:34.610: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 14 23:44:34.610: INFO: Waiting for pod ss2-2 to 
enter Running - Ready=true, currently Running - Ready=true Jul 14 23:44:34.621: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3806 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 14 23:44:34.891: INFO: stderr: "I0714 23:44:34.759740 243 log.go:181] (0xc000e4b080) (0xc000d1c820) Create stream\nI0714 23:44:34.759834 243 log.go:181] (0xc000e4b080) (0xc000d1c820) Stream added, broadcasting: 1\nI0714 23:44:34.766849 243 log.go:181] (0xc000e4b080) Reply frame received for 1\nI0714 23:44:34.766891 243 log.go:181] (0xc000e4b080) (0xc0009430e0) Create stream\nI0714 23:44:34.766915 243 log.go:181] (0xc000e4b080) (0xc0009430e0) Stream added, broadcasting: 3\nI0714 23:44:34.767942 243 log.go:181] (0xc000e4b080) Reply frame received for 3\nI0714 23:44:34.767985 243 log.go:181] (0xc000e4b080) (0xc0007a6aa0) Create stream\nI0714 23:44:34.768012 243 log.go:181] (0xc000e4b080) (0xc0007a6aa0) Stream added, broadcasting: 5\nI0714 23:44:34.768859 243 log.go:181] (0xc000e4b080) Reply frame received for 5\nI0714 23:44:34.853009 243 log.go:181] (0xc000e4b080) Data frame received for 5\nI0714 23:44:34.853041 243 log.go:181] (0xc0007a6aa0) (5) Data frame handling\nI0714 23:44:34.853072 243 log.go:181] (0xc0007a6aa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0714 23:44:34.882025 243 log.go:181] (0xc000e4b080) Data frame received for 3\nI0714 23:44:34.882048 243 log.go:181] (0xc0009430e0) (3) Data frame handling\nI0714 23:44:34.882063 243 log.go:181] (0xc0009430e0) (3) Data frame sent\nI0714 23:44:34.882468 243 log.go:181] (0xc000e4b080) Data frame received for 5\nI0714 23:44:34.882503 243 log.go:181] (0xc0007a6aa0) (5) Data frame handling\nI0714 23:44:34.882733 243 log.go:181] (0xc000e4b080) Data frame received for 3\nI0714 23:44:34.882766 243 log.go:181] (0xc0009430e0) (3) Data frame handling\nI0714 23:44:34.886861 243 log.go:181] 
(0xc000e4b080) Data frame received for 1\nI0714 23:44:34.886890 243 log.go:181] (0xc000d1c820) (1) Data frame handling\nI0714 23:44:34.886920 243 log.go:181] (0xc000d1c820) (1) Data frame sent\nI0714 23:44:34.886940 243 log.go:181] (0xc000e4b080) (0xc000d1c820) Stream removed, broadcasting: 1\nI0714 23:44:34.886970 243 log.go:181] (0xc000e4b080) Go away received\nI0714 23:44:34.887359 243 log.go:181] (0xc000e4b080) (0xc000d1c820) Stream removed, broadcasting: 1\nI0714 23:44:34.887387 243 log.go:181] (0xc000e4b080) (0xc0009430e0) Stream removed, broadcasting: 3\nI0714 23:44:34.887406 243 log.go:181] (0xc000e4b080) (0xc0007a6aa0) Stream removed, broadcasting: 5\n" Jul 14 23:44:34.891: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 14 23:44:34.891: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jul 14 23:44:44.923: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jul 14 23:44:54.967: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3806 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 14 23:44:55.248: INFO: stderr: "I0714 23:44:55.144364 261 log.go:181] (0xc000dab6b0) (0xc0002fa000) Create stream\nI0714 23:44:55.144445 261 log.go:181] (0xc000dab6b0) (0xc0002fa000) Stream added, broadcasting: 1\nI0714 23:44:55.147297 261 log.go:181] (0xc000dab6b0) Reply frame received for 1\nI0714 23:44:55.147369 261 log.go:181] (0xc000dab6b0) (0xc0004965a0) Create stream\nI0714 23:44:55.147401 261 log.go:181] (0xc000dab6b0) (0xc0004965a0) Stream added, broadcasting: 3\nI0714 23:44:55.148212 261 log.go:181] (0xc000dab6b0) Reply frame received for 
3\nI0714 23:44:55.148246 261 log.go:181] (0xc000dab6b0) (0xc0004b2280) Create stream\nI0714 23:44:55.148261 261 log.go:181] (0xc000dab6b0) (0xc0004b2280) Stream added, broadcasting: 5\nI0714 23:44:55.149031 261 log.go:181] (0xc000dab6b0) Reply frame received for 5\nI0714 23:44:55.241857 261 log.go:181] (0xc000dab6b0) Data frame received for 5\nI0714 23:44:55.241905 261 log.go:181] (0xc0004b2280) (5) Data frame handling\nI0714 23:44:55.241935 261 log.go:181] (0xc0004b2280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0714 23:44:55.241969 261 log.go:181] (0xc000dab6b0) Data frame received for 3\nI0714 23:44:55.242005 261 log.go:181] (0xc0004965a0) (3) Data frame handling\nI0714 23:44:55.242019 261 log.go:181] (0xc0004965a0) (3) Data frame sent\nI0714 23:44:55.242034 261 log.go:181] (0xc000dab6b0) Data frame received for 3\nI0714 23:44:55.242045 261 log.go:181] (0xc0004965a0) (3) Data frame handling\nI0714 23:44:55.242079 261 log.go:181] (0xc000dab6b0) Data frame received for 5\nI0714 23:44:55.242091 261 log.go:181] (0xc0004b2280) (5) Data frame handling\nI0714 23:44:55.243733 261 log.go:181] (0xc000dab6b0) Data frame received for 1\nI0714 23:44:55.243748 261 log.go:181] (0xc0002fa000) (1) Data frame handling\nI0714 23:44:55.243756 261 log.go:181] (0xc0002fa000) (1) Data frame sent\nI0714 23:44:55.243765 261 log.go:181] (0xc000dab6b0) (0xc0002fa000) Stream removed, broadcasting: 1\nI0714 23:44:55.243812 261 log.go:181] (0xc000dab6b0) Go away received\nI0714 23:44:55.244067 261 log.go:181] (0xc000dab6b0) (0xc0002fa000) Stream removed, broadcasting: 1\nI0714 23:44:55.244080 261 log.go:181] (0xc000dab6b0) (0xc0004965a0) Stream removed, broadcasting: 3\nI0714 23:44:55.244086 261 log.go:181] (0xc000dab6b0) (0xc0004b2280) Stream removed, broadcasting: 5\n" Jul 14 23:44:55.248: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 14 23:44:55.248: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true 
on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 14 23:45:05.265: INFO: Waiting for StatefulSet statefulset-3806/ss2 to complete update Jul 14 23:45:05.265: INFO: Waiting for Pod statefulset-3806/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 14 23:45:05.265: INFO: Waiting for Pod statefulset-3806/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 14 23:45:05.265: INFO: Waiting for Pod statefulset-3806/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 14 23:45:15.303: INFO: Waiting for StatefulSet statefulset-3806/ss2 to complete update Jul 14 23:45:15.303: INFO: Waiting for Pod statefulset-3806/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 14 23:45:15.303: INFO: Waiting for Pod statefulset-3806/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 14 23:45:25.275: INFO: Waiting for StatefulSet statefulset-3806/ss2 to complete update Jul 14 23:45:25.275: INFO: Waiting for Pod statefulset-3806/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 14 23:45:35.273: INFO: Waiting for StatefulSet statefulset-3806/ss2 to complete update Jul 14 23:45:35.273: INFO: Waiting for Pod statefulset-3806/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Jul 14 23:45:45.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3806 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 14 23:45:45.533: INFO: stderr: "I0714 23:45:45.396354 279 log.go:181] (0xc000b23290) (0xc0004e5720) Create stream\nI0714 23:45:45.396444 279 log.go:181] (0xc000b23290) (0xc0004e5720) Stream added, broadcasting: 1\nI0714 23:45:45.398913 279 log.go:181] (0xc000b23290) Reply frame received for 1\nI0714 23:45:45.398952 279 log.go:181] (0xc000b23290) (0xc000b8d0e0) Create 
stream\nI0714 23:45:45.398966 279 log.go:181] (0xc000b23290) (0xc000b8d0e0) Stream added, broadcasting: 3\nI0714 23:45:45.399923 279 log.go:181] (0xc000b23290) Reply frame received for 3\nI0714 23:45:45.399986 279 log.go:181] (0xc000b23290) (0xc000b8d5e0) Create stream\nI0714 23:45:45.400020 279 log.go:181] (0xc000b23290) (0xc000b8d5e0) Stream added, broadcasting: 5\nI0714 23:45:45.401148 279 log.go:181] (0xc000b23290) Reply frame received for 5\nI0714 23:45:45.481329 279 log.go:181] (0xc000b23290) Data frame received for 5\nI0714 23:45:45.481356 279 log.go:181] (0xc000b8d5e0) (5) Data frame handling\nI0714 23:45:45.481371 279 log.go:181] (0xc000b8d5e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0714 23:45:45.524169 279 log.go:181] (0xc000b23290) Data frame received for 3\nI0714 23:45:45.524196 279 log.go:181] (0xc000b8d0e0) (3) Data frame handling\nI0714 23:45:45.524246 279 log.go:181] (0xc000b8d0e0) (3) Data frame sent\nI0714 23:45:45.524679 279 log.go:181] (0xc000b23290) Data frame received for 5\nI0714 23:45:45.524784 279 log.go:181] (0xc000b8d5e0) (5) Data frame handling\nI0714 23:45:45.524831 279 log.go:181] (0xc000b23290) Data frame received for 3\nI0714 23:45:45.524875 279 log.go:181] (0xc000b8d0e0) (3) Data frame handling\nI0714 23:45:45.526988 279 log.go:181] (0xc000b23290) Data frame received for 1\nI0714 23:45:45.527031 279 log.go:181] (0xc0004e5720) (1) Data frame handling\nI0714 23:45:45.527061 279 log.go:181] (0xc0004e5720) (1) Data frame sent\nI0714 23:45:45.527086 279 log.go:181] (0xc000b23290) (0xc0004e5720) Stream removed, broadcasting: 1\nI0714 23:45:45.527486 279 log.go:181] (0xc000b23290) Go away received\nI0714 23:45:45.527973 279 log.go:181] (0xc000b23290) (0xc0004e5720) Stream removed, broadcasting: 1\nI0714 23:45:45.528000 279 log.go:181] (0xc000b23290) (0xc000b8d0e0) Stream removed, broadcasting: 3\nI0714 23:45:45.528013 279 log.go:181] (0xc000b23290) (0xc000b8d5e0) Stream removed, broadcasting: 5\n" Jul 14 
23:45:45.533: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 14 23:45:45.533: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 14 23:45:55.607: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jul 14 23:46:05.666: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3806 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 14 23:46:05.875: INFO: stderr: "I0714 23:46:05.797867 297 log.go:181] (0xc000a3cfd0) (0xc000d9a780) Create stream\nI0714 23:46:05.797918 297 log.go:181] (0xc000a3cfd0) (0xc000d9a780) Stream added, broadcasting: 1\nI0714 23:46:05.803476 297 log.go:181] (0xc000a3cfd0) Reply frame received for 1\nI0714 23:46:05.803510 297 log.go:181] (0xc000a3cfd0) (0xc00088b220) Create stream\nI0714 23:46:05.803520 297 log.go:181] (0xc000a3cfd0) (0xc00088b220) Stream added, broadcasting: 3\nI0714 23:46:05.804696 297 log.go:181] (0xc000a3cfd0) Reply frame received for 3\nI0714 23:46:05.804814 297 log.go:181] (0xc000a3cfd0) (0xc0006c4780) Create stream\nI0714 23:46:05.804839 297 log.go:181] (0xc000a3cfd0) (0xc0006c4780) Stream added, broadcasting: 5\nI0714 23:46:05.806052 297 log.go:181] (0xc000a3cfd0) Reply frame received for 5\nI0714 23:46:05.866104 297 log.go:181] (0xc000a3cfd0) Data frame received for 3\nI0714 23:46:05.866157 297 log.go:181] (0xc00088b220) (3) Data frame handling\nI0714 23:46:05.866175 297 log.go:181] (0xc00088b220) (3) Data frame sent\nI0714 23:46:05.866203 297 log.go:181] (0xc000a3cfd0) Data frame received for 5\nI0714 23:46:05.866220 297 log.go:181] (0xc0006c4780) (5) Data frame handling\nI0714 23:46:05.866240 297 log.go:181] (0xc0006c4780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0714 23:46:05.866254 297 log.go:181] (0xc000a3cfd0) 
Data frame received for 5\nI0714 23:46:05.866264 297 log.go:181] (0xc0006c4780) (5) Data frame handling\nI0714 23:46:05.866283 297 log.go:181] (0xc000a3cfd0) Data frame received for 3\nI0714 23:46:05.866301 297 log.go:181] (0xc00088b220) (3) Data frame handling\nI0714 23:46:05.868262 297 log.go:181] (0xc000a3cfd0) Data frame received for 1\nI0714 23:46:05.868292 297 log.go:181] (0xc000d9a780) (1) Data frame handling\nI0714 23:46:05.868327 297 log.go:181] (0xc000d9a780) (1) Data frame sent\nI0714 23:46:05.868562 297 log.go:181] (0xc000a3cfd0) (0xc000d9a780) Stream removed, broadcasting: 1\nI0714 23:46:05.868613 297 log.go:181] (0xc000a3cfd0) Go away received\nI0714 23:46:05.869210 297 log.go:181] (0xc000a3cfd0) (0xc000d9a780) Stream removed, broadcasting: 1\nI0714 23:46:05.869244 297 log.go:181] (0xc000a3cfd0) (0xc00088b220) Stream removed, broadcasting: 3\nI0714 23:46:05.869258 297 log.go:181] (0xc000a3cfd0) (0xc0006c4780) Stream removed, broadcasting: 5\n" Jul 14 23:46:05.875: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 14 23:46:05.875: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 14 23:46:15.898: INFO: Waiting for StatefulSet statefulset-3806/ss2 to complete update Jul 14 23:46:15.898: INFO: Waiting for Pod statefulset-3806/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jul 14 23:46:15.898: INFO: Waiting for Pod statefulset-3806/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jul 14 23:46:15.898: INFO: Waiting for Pod statefulset-3806/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jul 14 23:46:25.906: INFO: Waiting for StatefulSet statefulset-3806/ss2 to complete update Jul 14 23:46:25.906: INFO: Waiting for Pod statefulset-3806/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality 
[StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jul 14 23:46:35.906: INFO: Deleting all statefulset in ns statefulset-3806 Jul 14 23:46:35.909: INFO: Scaling statefulset ss2 to 0 Jul 14 23:47:05.926: INFO: Waiting for statefulset status.replicas updated to 0 Jul 14 23:47:05.930: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:47:05.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3806" for this suite. • [SLOW TEST:171.458 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":294,"completed":52,"skipped":922,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:47:05.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default 
service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Jul 14 23:47:06.545: INFO: created pod pod-service-account-defaultsa Jul 14 23:47:06.545: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jul 14 23:47:06.566: INFO: created pod pod-service-account-mountsa Jul 14 23:47:06.566: INFO: pod pod-service-account-mountsa service account token volume mount: true Jul 14 23:47:06.603: INFO: created pod pod-service-account-nomountsa Jul 14 23:47:06.603: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jul 14 23:47:06.636: INFO: created pod pod-service-account-defaultsa-mountspec Jul 14 23:47:06.636: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jul 14 23:47:06.675: INFO: created pod pod-service-account-mountsa-mountspec Jul 14 23:47:06.675: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jul 14 23:47:06.741: INFO: created pod pod-service-account-nomountsa-mountspec Jul 14 23:47:06.741: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jul 14 23:47:06.754: INFO: created pod pod-service-account-defaultsa-nomountspec Jul 14 23:47:06.754: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jul 14 23:47:06.796: INFO: created pod pod-service-account-mountsa-nomountspec Jul 14 23:47:06.796: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jul 14 23:47:06.828: INFO: created pod pod-service-account-nomountsa-nomountspec Jul 14 23:47:06.828: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:47:06.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6386" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":294,"completed":53,"skipped":967,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:47:06.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-cf3574e8-6b24-4df8-a4ad-28da7c4411f4 STEP: Creating a pod to test consume secrets Jul 14 23:47:07.113: INFO: Waiting up to 5m0s for pod "pod-secrets-c3a9d2c7-0f74-4de1-9ab1-818a1982d57b" in namespace "secrets-7613" to be "Succeeded or Failed" Jul 14 23:47:07.165: INFO: Pod "pod-secrets-c3a9d2c7-0f74-4de1-9ab1-818a1982d57b": Phase="Pending", Reason="", readiness=false. Elapsed: 52.746152ms Jul 14 23:47:09.393: INFO: Pod "pod-secrets-c3a9d2c7-0f74-4de1-9ab1-818a1982d57b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.280731123s Jul 14 23:47:11.591: INFO: Pod "pod-secrets-c3a9d2c7-0f74-4de1-9ab1-818a1982d57b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.478665152s Jul 14 23:47:13.783: INFO: Pod "pod-secrets-c3a9d2c7-0f74-4de1-9ab1-818a1982d57b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.669935158s Jul 14 23:47:15.927: INFO: Pod "pod-secrets-c3a9d2c7-0f74-4de1-9ab1-818a1982d57b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.813955663s Jul 14 23:47:18.152: INFO: Pod "pod-secrets-c3a9d2c7-0f74-4de1-9ab1-818a1982d57b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.039145177s Jul 14 23:47:20.196: INFO: Pod "pod-secrets-c3a9d2c7-0f74-4de1-9ab1-818a1982d57b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.083232597s STEP: Saw pod success Jul 14 23:47:20.196: INFO: Pod "pod-secrets-c3a9d2c7-0f74-4de1-9ab1-818a1982d57b" satisfied condition "Succeeded or Failed" Jul 14 23:47:20.447: INFO: Trying to get logs from node latest-worker pod pod-secrets-c3a9d2c7-0f74-4de1-9ab1-818a1982d57b container secret-volume-test: STEP: delete the pod Jul 14 23:47:20.940: INFO: Waiting for pod pod-secrets-c3a9d2c7-0f74-4de1-9ab1-818a1982d57b to disappear Jul 14 23:47:20.985: INFO: Pod pod-secrets-c3a9d2c7-0f74-4de1-9ab1-818a1982d57b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:47:20.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7613" for this suite. 
• [SLOW TEST:14.491 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":294,"completed":54,"skipped":982,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:47:21.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-3006, will wait for the garbage collector to delete the pods Jul 14 23:47:27.901: INFO: Deleting Job.batch foo took: 6.674272ms Jul 14 23:47:28.001: INFO: Terminating Job.batch foo pods took: 100.286295ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:48:09.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3006" for this suite. 
• [SLOW TEST:47.871 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":294,"completed":55,"skipped":1003,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:48:09.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-7960861d-6f80-4375-acda-88bf34e23322 STEP: Creating a pod to test consume secrets Jul 14 23:48:09.430: INFO: Waiting up to 5m0s for pod "pod-secrets-08874dbb-460b-4912-953b-e783256080de" in namespace "secrets-3875" to be "Succeeded or Failed" Jul 14 23:48:09.462: INFO: Pod "pod-secrets-08874dbb-460b-4912-953b-e783256080de": Phase="Pending", Reason="", readiness=false. Elapsed: 31.657109ms Jul 14 23:48:11.467: INFO: Pod "pod-secrets-08874dbb-460b-4912-953b-e783256080de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036214818s Jul 14 23:48:13.490: INFO: Pod "pod-secrets-08874dbb-460b-4912-953b-e783256080de": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.059487759s STEP: Saw pod success Jul 14 23:48:13.490: INFO: Pod "pod-secrets-08874dbb-460b-4912-953b-e783256080de" satisfied condition "Succeeded or Failed" Jul 14 23:48:13.493: INFO: Trying to get logs from node latest-worker pod pod-secrets-08874dbb-460b-4912-953b-e783256080de container secret-volume-test: STEP: delete the pod Jul 14 23:48:13.525: INFO: Waiting for pod pod-secrets-08874dbb-460b-4912-953b-e783256080de to disappear Jul 14 23:48:13.537: INFO: Pod pod-secrets-08874dbb-460b-4912-953b-e783256080de no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:48:13.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3875" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":294,"completed":56,"skipped":1010,"failed":0} SS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:48:13.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jul 14 23:48:18.124: INFO: 
&Pod{ObjectMeta:{send-events-30592df5-2cf7-434d-8415-a5bf19eaba78 events-7752 /api/v1/namespaces/events-7752/pods/send-events-30592df5-2cf7-434d-8415-a5bf19eaba78 ff7e7bb3-9798-4759-909f-e82ab03fc12f 1216215 0 2020-07-14 23:48:14 +0000 UTC map[name:foo time:92854181] map[] [] [] [{e2e.test Update v1 2020-07-14 23:48:14 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-14 23:48:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.240\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2x7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2x7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil
,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2x7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologyS
preadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-14 23:48:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-14 23:48:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-14 23:48:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-14 23:48:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.240,StartTime:2020-07-14 23:48:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-14 23:48:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://54163f77372cb3b0d35b61feaace8be9c8ee98355087fd055120b18bf8328269,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.240,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Jul 14 23:48:20.129: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jul 14 23:48:22.133: INFO: Saw kubelet event for our pod. 
STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:48:22.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7752" for this suite. • [SLOW TEST:8.673 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":294,"completed":57,"skipped":1012,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:48:22.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:48:26.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4955" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":294,"completed":58,"skipped":1020,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:48:26.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1602 STEP: creating service affinity-clusterip in namespace services-1602 STEP: creating replication controller affinity-clusterip in namespace services-1602 I0714 23:48:26.501338 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-1602, replica count: 3 I0714 23:48:29.552088 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0714 23:48:32.552302 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0714 23:48:35.552533 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady Jul 14 23:48:35.558: INFO: Creating new exec pod Jul 14 23:48:40.611: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-1602 execpod-affinityw4fbs -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Jul 14 23:48:40.834: INFO: stderr: "I0714 23:48:40.741592 315 log.go:181] (0xc000e2b130) (0xc000bff860) Create stream\nI0714 23:48:40.741655 315 log.go:181] (0xc000e2b130) (0xc000bff860) Stream added, broadcasting: 1\nI0714 23:48:40.746635 315 log.go:181] (0xc000e2b130) Reply frame received for 1\nI0714 23:48:40.746666 315 log.go:181] (0xc000e2b130) (0xc000298dc0) Create stream\nI0714 23:48:40.746675 315 log.go:181] (0xc000e2b130) (0xc000298dc0) Stream added, broadcasting: 3\nI0714 23:48:40.747612 315 log.go:181] (0xc000e2b130) Reply frame received for 3\nI0714 23:48:40.747662 315 log.go:181] (0xc000e2b130) (0xc0004841e0) Create stream\nI0714 23:48:40.747679 315 log.go:181] (0xc000e2b130) (0xc0004841e0) Stream added, broadcasting: 5\nI0714 23:48:40.748429 315 log.go:181] (0xc000e2b130) Reply frame received for 5\nI0714 23:48:40.811089 315 log.go:181] (0xc000e2b130) Data frame received for 5\nI0714 23:48:40.811111 315 log.go:181] (0xc0004841e0) (5) Data frame handling\nI0714 23:48:40.811119 315 log.go:181] (0xc0004841e0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0714 23:48:40.826084 315 log.go:181] (0xc000e2b130) Data frame received for 5\nI0714 23:48:40.826124 315 log.go:181] (0xc0004841e0) (5) Data frame handling\nI0714 23:48:40.826154 315 log.go:181] (0xc0004841e0) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0714 23:48:40.826266 315 log.go:181] (0xc000e2b130) Data frame received for 5\nI0714 23:48:40.826307 315 log.go:181] (0xc0004841e0) (5) Data frame handling\nI0714 23:48:40.826690 315 log.go:181] (0xc000e2b130) Data frame received for 3\nI0714 23:48:40.826722 315 log.go:181] (0xc000298dc0) (3) Data 
frame handling\nI0714 23:48:40.828425 315 log.go:181] (0xc000e2b130) Data frame received for 1\nI0714 23:48:40.828448 315 log.go:181] (0xc000bff860) (1) Data frame handling\nI0714 23:48:40.828462 315 log.go:181] (0xc000bff860) (1) Data frame sent\nI0714 23:48:40.828478 315 log.go:181] (0xc000e2b130) (0xc000bff860) Stream removed, broadcasting: 1\nI0714 23:48:40.828492 315 log.go:181] (0xc000e2b130) Go away received\nI0714 23:48:40.829001 315 log.go:181] (0xc000e2b130) (0xc000bff860) Stream removed, broadcasting: 1\nI0714 23:48:40.829025 315 log.go:181] (0xc000e2b130) (0xc000298dc0) Stream removed, broadcasting: 3\nI0714 23:48:40.829038 315 log.go:181] (0xc000e2b130) (0xc0004841e0) Stream removed, broadcasting: 5\n" Jul 14 23:48:40.834: INFO: stdout: "" Jul 14 23:48:40.835: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-1602 execpod-affinityw4fbs -- /bin/sh -x -c nc -zv -t -w 2 10.108.6.152 80' Jul 14 23:48:41.031: INFO: stderr: "I0714 23:48:40.958799 333 log.go:181] (0xc0008acf20) (0xc000e8e320) Create stream\nI0714 23:48:40.958845 333 log.go:181] (0xc0008acf20) (0xc000e8e320) Stream added, broadcasting: 1\nI0714 23:48:40.964816 333 log.go:181] (0xc0008acf20) Reply frame received for 1\nI0714 23:48:40.964871 333 log.go:181] (0xc0008acf20) (0xc000bfa320) Create stream\nI0714 23:48:40.964885 333 log.go:181] (0xc0008acf20) (0xc000bfa320) Stream added, broadcasting: 3\nI0714 23:48:40.965924 333 log.go:181] (0xc0008acf20) Reply frame received for 3\nI0714 23:48:40.965962 333 log.go:181] (0xc0008acf20) (0xc000aa2820) Create stream\nI0714 23:48:40.965974 333 log.go:181] (0xc0008acf20) (0xc000aa2820) Stream added, broadcasting: 5\nI0714 23:48:40.966900 333 log.go:181] (0xc0008acf20) Reply frame received for 5\nI0714 23:48:41.024714 333 log.go:181] (0xc0008acf20) Data frame received for 5\nI0714 23:48:41.024837 333 log.go:181] (0xc000aa2820) (5) Data frame handling\nI0714 23:48:41.024855 
333 log.go:181] (0xc000aa2820) (5) Data frame sent\nI0714 23:48:41.024862 333 log.go:181] (0xc0008acf20) Data frame received for 5\nI0714 23:48:41.024868 333 log.go:181] (0xc000aa2820) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.6.152 80\nConnection to 10.108.6.152 80 port [tcp/http] succeeded!\nI0714 23:48:41.024918 333 log.go:181] (0xc0008acf20) Data frame received for 3\nI0714 23:48:41.024964 333 log.go:181] (0xc000bfa320) (3) Data frame handling\nI0714 23:48:41.026133 333 log.go:181] (0xc0008acf20) Data frame received for 1\nI0714 23:48:41.026154 333 log.go:181] (0xc000e8e320) (1) Data frame handling\nI0714 23:48:41.026168 333 log.go:181] (0xc000e8e320) (1) Data frame sent\nI0714 23:48:41.026184 333 log.go:181] (0xc0008acf20) (0xc000e8e320) Stream removed, broadcasting: 1\nI0714 23:48:41.026197 333 log.go:181] (0xc0008acf20) Go away received\nI0714 23:48:41.026671 333 log.go:181] (0xc0008acf20) (0xc000e8e320) Stream removed, broadcasting: 1\nI0714 23:48:41.026704 333 log.go:181] (0xc0008acf20) (0xc000bfa320) Stream removed, broadcasting: 3\nI0714 23:48:41.026723 333 log.go:181] (0xc0008acf20) (0xc000aa2820) Stream removed, broadcasting: 5\n" Jul 14 23:48:41.031: INFO: stdout: "" Jul 14 23:48:41.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-1602 execpod-affinityw4fbs -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.108.6.152:80/ ; done' Jul 14 23:48:41.448: INFO: stderr: "I0714 23:48:41.173900 352 log.go:181] (0xc0006d60b0) (0xc000c73900) Create stream\nI0714 23:48:41.173979 352 log.go:181] (0xc0006d60b0) (0xc000c73900) Stream added, broadcasting: 1\nI0714 23:48:41.178648 352 log.go:181] (0xc0006d60b0) Reply frame received for 1\nI0714 23:48:41.178694 352 log.go:181] (0xc0006d60b0) (0xc000a33040) Create stream\nI0714 23:48:41.178709 352 log.go:181] (0xc0006d60b0) (0xc000a33040) Stream added, broadcasting: 3\nI0714 
23:48:41.179770 352 log.go:181] (0xc0006d60b0) Reply frame received for 3\nI0714 23:48:41.179805 352 log.go:181] (0xc0006d60b0) (0xc000a28820) Create stream\nI0714 23:48:41.179817 352 log.go:181] (0xc0006d60b0) (0xc000a28820) Stream added, broadcasting: 5\nI0714 23:48:41.180948 352 log.go:181] (0xc0006d60b0) Reply frame received for 5\nI0714 23:48:41.235940 352 log.go:181] (0xc0006d60b0) Data frame received for 5\nI0714 23:48:41.235975 352 log.go:181] (0xc000a28820) (5) Data frame handling\nI0714 23:48:41.235984 352 log.go:181] (0xc000a28820) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.6.152:80/\nI0714 23:48:41.235996 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.236001 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.236007 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.361263 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.361297 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.361323 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.362109 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.362134 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.362146 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.362160 352 log.go:181] (0xc0006d60b0) Data frame received for 5\nI0714 23:48:41.362168 352 log.go:181] (0xc000a28820) (5) Data frame handling\nI0714 23:48:41.362177 352 log.go:181] (0xc000a28820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.6.152:80/\nI0714 23:48:41.368905 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.368931 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.368947 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.369541 352 log.go:181] (0xc0006d60b0) Data frame received for 5\nI0714 23:48:41.369568 352 log.go:181] 
(0xc000a28820) (5) Data frame handling\nI0714 23:48:41.369585 352 log.go:181] (0xc000a28820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0714 23:48:41.370643 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.370672 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.370682 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.370697 352 log.go:181] (0xc0006d60b0) Data frame received for 5\nI0714 23:48:41.370713 352 log.go:181] (0xc000a28820) (5) Data frame handling\nI0714 23:48:41.370721 352 log.go:181] (0xc000a28820) (5) Data frame sent\n 2 http://10.108.6.152:80/\nI0714 23:48:41.373402 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.373420 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.373438 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.374354 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.374385 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.374402 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.374427 352 log.go:181] (0xc0006d60b0) Data frame received for 5\nI0714 23:48:41.374439 352 log.go:181] (0xc000a28820) (5) Data frame handling\nI0714 23:48:41.374461 352 log.go:181] (0xc000a28820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.6.152:80/\nI0714 23:48:41.379395 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.379418 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.379443 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.379824 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.379838 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.379844 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.379904 352 log.go:181] (0xc0006d60b0) Data frame received for 5\nI0714 23:48:41.379915 352 log.go:181] 
(0xc000a28820) (5) Data frame handling\nI0714 23:48:41.379921 352 log.go:181] (0xc000a28820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.6.152:80/\nI0714 23:48:41.384054 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.384073 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.384091 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.384578 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.384612 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.384627 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.384644 352 log.go:181] (0xc0006d60b0) Data frame received for 5\nI0714 23:48:41.384653 352 log.go:181] (0xc000a28820) (5) Data frame handling\nI0714 23:48:41.384663 352 log.go:181] (0xc000a28820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.6.152:80/\nI0714 23:48:41.388223 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.388239 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.388251 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.388825 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.388849 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.388857 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.388879 352 log.go:181] (0xc0006d60b0) Data frame received for 5\nI0714 23:48:41.388898 352 log.go:181] (0xc000a28820) (5) Data frame handling\nI0714 23:48:41.388915 352 log.go:181] (0xc000a28820) (5) Data frame sent\nI0714 23:48:41.388927 352 log.go:181] (0xc0006d60b0) Data frame received for 5\nI0714 23:48:41.388939 352 log.go:181] (0xc000a28820) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.6.152:80/\nI0714 23:48:41.388960 352 log.go:181] (0xc000a28820) (5) Data frame sent\nI0714 23:48:41.393493 352 log.go:181] (0xc0006d60b0) Data frame 
received for 3\nI0714 23:48:41.393523 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.393542 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.393958 352 log.go:181] (0xc0006d60b0) Data frame received for 5\nI0714 23:48:41.393977 352 log.go:181] (0xc000a28820) (5) Data frame handling\nI0714 23:48:41.393994 352 log.go:181] (0xc000a28820) (5) Data frame sent\nI0714 23:48:41.394004 352 log.go:181] (0xc0006d60b0) Data frame received for 5\nI0714 23:48:41.394014 352 log.go:181] (0xc000a28820) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.6.152:80/\nI0714 23:48:41.394041 352 log.go:181] (0xc000a28820) (5) Data frame sent\nI0714 23:48:41.394117 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.394132 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.394144 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.397748 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.397767 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.397788 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.398456 352 log.go:181] (0xc0006d60b0) Data frame received for 5\nI0714 23:48:41.398491 352 log.go:181] (0xc000a28820) (5) Data frame handling\nI0714 23:48:41.398510 352 log.go:181] (0xc000a28820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.6.152:80/\nI0714 23:48:41.398530 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.398549 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.398570 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.403299 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.403312 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.403320 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.403888 352 log.go:181] (0xc0006d60b0) Data frame received 
for 3\nI0714 23:48:41.403907 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.403916 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.403927 352 log.go:181] (0xc0006d60b0) Data frame received for 5\nI0714 23:48:41.403935 352 log.go:181] (0xc000a28820) (5) Data frame handling\nI0714 23:48:41.403947 352 log.go:181] (0xc000a28820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.6.152:80/\nI0714 23:48:41.410082 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.410117 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.410127 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.410295 352 log.go:181] (0xc0006d60b0) Data frame received for 5\nI0714 23:48:41.410310 352 log.go:181] (0xc000a28820) (5) Data frame handling\nI0714 23:48:41.410320 352 log.go:181] (0xc000a28820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.6.152:80/\nI0714 23:48:41.417229 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.417245 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.417263 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.417574 352 log.go:181] (0xc0006d60b0) Data frame received for 5\nI0714 23:48:41.417585 352 log.go:181] (0xc000a28820) (5) Data frame handling\nI0714 23:48:41.417591 352 log.go:181] (0xc000a28820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.6.152:80/\nI0714 23:48:41.417605 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.417615 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.417627 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.422445 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.422464 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.422478 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 
23:48:41.422988 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.423022 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.423040 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.423061 352 log.go:181] (0xc0006d60b0) Data frame received for 5\nI0714 23:48:41.423074 352 log.go:181] (0xc000a28820) (5) Data frame handling\nI0714 23:48:41.423092 352 log.go:181] (0xc000a28820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.6.152:80/\nI0714 23:48:41.426618 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.426631 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.426640 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.427095 352 log.go:181] (0xc0006d60b0) Data frame received for 5\nI0714 23:48:41.427131 352 log.go:181] (0xc000a28820) (5) Data frame handling\nI0714 23:48:41.427149 352 log.go:181] (0xc000a28820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.6.152:80/\nI0714 23:48:41.427169 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.427186 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.427205 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.430505 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.430531 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.430550 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.430930 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.430957 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.430979 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.430994 352 log.go:181] (0xc0006d60b0) Data frame received for 5\nI0714 23:48:41.431009 352 log.go:181] (0xc000a28820) (5) Data frame handling\nI0714 23:48:41.431019 352 log.go:181] (0xc000a28820) (5) Data frame sent\nI0714 
23:48:41.431026 352 log.go:181] (0xc0006d60b0) Data frame received for 5\nI0714 23:48:41.431039 352 log.go:181] (0xc000a28820) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.6.152:80/\nI0714 23:48:41.431077 352 log.go:181] (0xc000a28820) (5) Data frame sent\nI0714 23:48:41.435235 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.435256 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.435274 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.435800 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.435831 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.435843 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.435861 352 log.go:181] (0xc0006d60b0) Data frame received for 5\nI0714 23:48:41.435870 352 log.go:181] (0xc000a28820) (5) Data frame handling\nI0714 23:48:41.435887 352 log.go:181] (0xc000a28820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.6.152:80/\nI0714 23:48:41.441154 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.441177 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.441201 352 log.go:181] (0xc000a33040) (3) Data frame sent\nI0714 23:48:41.441945 352 log.go:181] (0xc0006d60b0) Data frame received for 3\nI0714 23:48:41.441960 352 log.go:181] (0xc000a33040) (3) Data frame handling\nI0714 23:48:41.442351 352 log.go:181] (0xc0006d60b0) Data frame received for 5\nI0714 23:48:41.442361 352 log.go:181] (0xc000a28820) (5) Data frame handling\nI0714 23:48:41.443791 352 log.go:181] (0xc0006d60b0) Data frame received for 1\nI0714 23:48:41.443815 352 log.go:181] (0xc000c73900) (1) Data frame handling\nI0714 23:48:41.443833 352 log.go:181] (0xc000c73900) (1) Data frame sent\nI0714 23:48:41.443848 352 log.go:181] (0xc0006d60b0) (0xc000c73900) Stream removed, broadcasting: 1\nI0714 23:48:41.443955 352 log.go:181] (0xc0006d60b0) Go away 
received\nI0714 23:48:41.444200 352 log.go:181] (0xc0006d60b0) (0xc000c73900) Stream removed, broadcasting: 1\nI0714 23:48:41.444216 352 log.go:181] (0xc0006d60b0) (0xc000a33040) Stream removed, broadcasting: 3\nI0714 23:48:41.444229 352 log.go:181] (0xc0006d60b0) (0xc000a28820) Stream removed, broadcasting: 5\n" Jul 14 23:48:41.448: INFO: stdout: "\naffinity-clusterip-zt9wg\naffinity-clusterip-zt9wg\naffinity-clusterip-zt9wg\naffinity-clusterip-zt9wg\naffinity-clusterip-zt9wg\naffinity-clusterip-zt9wg\naffinity-clusterip-zt9wg\naffinity-clusterip-zt9wg\naffinity-clusterip-zt9wg\naffinity-clusterip-zt9wg\naffinity-clusterip-zt9wg\naffinity-clusterip-zt9wg\naffinity-clusterip-zt9wg\naffinity-clusterip-zt9wg\naffinity-clusterip-zt9wg\naffinity-clusterip-zt9wg" Jul 14 23:48:41.448: INFO: Received response from host: affinity-clusterip-zt9wg Jul 14 23:48:41.448: INFO: Received response from host: affinity-clusterip-zt9wg Jul 14 23:48:41.448: INFO: Received response from host: affinity-clusterip-zt9wg Jul 14 23:48:41.448: INFO: Received response from host: affinity-clusterip-zt9wg Jul 14 23:48:41.449: INFO: Received response from host: affinity-clusterip-zt9wg Jul 14 23:48:41.449: INFO: Received response from host: affinity-clusterip-zt9wg Jul 14 23:48:41.449: INFO: Received response from host: affinity-clusterip-zt9wg Jul 14 23:48:41.449: INFO: Received response from host: affinity-clusterip-zt9wg Jul 14 23:48:41.449: INFO: Received response from host: affinity-clusterip-zt9wg Jul 14 23:48:41.449: INFO: Received response from host: affinity-clusterip-zt9wg Jul 14 23:48:41.449: INFO: Received response from host: affinity-clusterip-zt9wg Jul 14 23:48:41.449: INFO: Received response from host: affinity-clusterip-zt9wg Jul 14 23:48:41.449: INFO: Received response from host: affinity-clusterip-zt9wg Jul 14 23:48:41.449: INFO: Received response from host: affinity-clusterip-zt9wg Jul 14 23:48:41.449: INFO: Received response from host: affinity-clusterip-zt9wg Jul 14 
23:48:41.449: INFO: Received response from host: affinity-clusterip-zt9wg Jul 14 23:48:41.449: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-1602, will wait for the garbage collector to delete the pods Jul 14 23:48:41.600: INFO: Deleting ReplicationController affinity-clusterip took: 24.859624ms Jul 14 23:48:42.000: INFO: Terminating ReplicationController affinity-clusterip pods took: 400.21039ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:48:49.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1602" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:22.871 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":294,"completed":59,"skipped":1037,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:48:49.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 14 23:48:49.302: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jul 14 23:48:49.428: INFO: Pod name sample-pod: Found 0 pods out of 1 Jul 14 23:48:54.431: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 14 23:48:54.431: INFO: Creating deployment "test-rolling-update-deployment" Jul 14 23:48:54.436: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jul 14 23:48:54.502: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jul 14 23:48:56.589: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jul 14 23:48:56.598: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367334, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367334, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367334, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367334, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rolling-update-deployment-8597df97cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 14 23:48:58.605: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jul 14 23:48:58.615: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7441 /apis/apps/v1/namespaces/deployment-7441/deployments/test-rolling-update-deployment f2893c62-4219-4b32-a738-52f811aaeeec 1216554 1 2020-07-14 23:48:54 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-07-14 23:48:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-07-14 23:48:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00391b268 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum 
availability.,LastUpdateTime:2020-07-14 23:48:54 +0000 UTC,LastTransitionTime:2020-07-14 23:48:54 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-8597df97cd" has successfully progressed.,LastUpdateTime:2020-07-14 23:48:57 +0000 UTC,LastTransitionTime:2020-07-14 23:48:54 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jul 14 23:48:58.618: INFO: New ReplicaSet "test-rolling-update-deployment-8597df97cd" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-8597df97cd deployment-7441 /apis/apps/v1/namespaces/deployment-7441/replicasets/test-rolling-update-deployment-8597df97cd c85881a9-6b99-496c-aa03-186bc4ce2ea1 1216543 1 2020-07-14 23:48:54 +0000 UTC map[name:sample-pod pod-template-hash:8597df97cd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment f2893c62-4219-4b32-a738-52f811aaeeec 0xc00077ec17 0xc00077ec18}] [] [{kube-controller-manager Update apps/v1 2020-07-14 23:48:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f2893c62-4219-4b32-a738-52f811aaeeec\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 8597df97cd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:8597df97cd] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00077eca8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jul 14 23:48:58.618: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jul 14 23:48:58.618: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7441 /apis/apps/v1/namespaces/deployment-7441/replicasets/test-rolling-update-controller 8b46384e-c5ee-436c-aaf3-c257a3c32e22 1216553 2 2020-07-14 23:48:49 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment f2893c62-4219-4b32-a738-52f811aaeeec 0xc00077eacf 0xc00077eae0}] [] [{e2e.test Update apps/v1 2020-07-14 23:48:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-07-14 23:48:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f2893c62-4219-4b32-a738-52f811aaeeec\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00077eba8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 14 23:48:58.621: INFO: Pod "test-rolling-update-deployment-8597df97cd-2wmrf" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-8597df97cd-2wmrf test-rolling-update-deployment-8597df97cd- deployment-7441 /api/v1/namespaces/deployment-7441/pods/test-rolling-update-deployment-8597df97cd-2wmrf 2917f08a-0a46-4821-ba38-83251da293b3 1216542 0 2020-07-14 23:48:54 +0000 UTC map[name:sample-pod pod-template-hash:8597df97cd] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-8597df97cd c85881a9-6b99-496c-aa03-186bc4ce2ea1 0xc00077f167 0xc00077f168}] [] [{kube-controller-manager Update v1 2020-07-14 23:48:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c85881a9-6b99-496c-aa03-186bc4ce2ea1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-14 23:48:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.244\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wlpv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wlpv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Reso
urces:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wlpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameA
sFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-14 23:48:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-14 23:48:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-14 23:48:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-14 23:48:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.244,StartTime:2020-07-14 23:48:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-14 23:48:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://8f5ba1939188b11e99794e59dd9cda55f95b902092c47cc7b175265dd5b867bc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.244,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:48:58.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7441" for this suite. 
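[Editor's note] The rolling-update test above checks that when the Deployment adopts the existing ReplicaSet "test-rolling-update-controller" (revision annotation 3546343826724305832), the new ReplicaSet it creates carries the next revision (3546343826724305833). A minimal sketch of that revision bookkeeping, using the annotation values from the dumps above (`next_revision` is a hypothetical helper, not part of the e2e framework):

```python
# deployment.kubernetes.io/revision is stored as a decimal string annotation.
# On a rollout, the new ReplicaSet gets the adopted ReplicaSet's revision + 1.

def next_revision(annotation: str) -> str:
    """Return the revision annotation expected on the new ReplicaSet."""
    return str(int(annotation) + 1)

adopted_rs_revision = "3546343826724305832"  # test-rolling-update-controller
new_rs_revision = "3546343826724305833"      # test-rolling-update-deployment-8597df97cd

assert next_revision(adopted_rs_revision) == new_rs_revision
```

This is why the log says the deployment must "get the next revision from the one the adopted replica set has": the controller derives the new revision from the highest one it finds among owned ReplicaSets.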
• [SLOW TEST:9.397 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":294,"completed":60,"skipped":1047,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:48:58.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-072a364a-2a03-4340-8d3a-7f47a6a7cd1f STEP: Creating a pod to test consume configMaps Jul 14 23:48:58.708: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f6e52e36-37b3-4ada-a173-cb32e65c5250" in namespace "projected-4462" to be "Succeeded or Failed" Jul 14 23:48:58.719: INFO: Pod "pod-projected-configmaps-f6e52e36-37b3-4ada-a173-cb32e65c5250": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.170736ms Jul 14 23:49:00.724: INFO: Pod "pod-projected-configmaps-f6e52e36-37b3-4ada-a173-cb32e65c5250": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016045695s Jul 14 23:49:02.736: INFO: Pod "pod-projected-configmaps-f6e52e36-37b3-4ada-a173-cb32e65c5250": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027998718s STEP: Saw pod success Jul 14 23:49:02.736: INFO: Pod "pod-projected-configmaps-f6e52e36-37b3-4ada-a173-cb32e65c5250" satisfied condition "Succeeded or Failed" Jul 14 23:49:02.738: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-f6e52e36-37b3-4ada-a173-cb32e65c5250 container projected-configmap-volume-test: STEP: delete the pod Jul 14 23:49:02.834: INFO: Waiting for pod pod-projected-configmaps-f6e52e36-37b3-4ada-a173-cb32e65c5250 to disappear Jul 14 23:49:02.898: INFO: Pod pod-projected-configmaps-f6e52e36-37b3-4ada-a173-cb32e65c5250 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:49:02.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4462" for this suite. 
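[Editor's note] The pod spec dumps above print volume file modes in decimal (e.g. `DefaultMode:*420` on the service-account token volume); the projected-configMap tests that follow set these modes in octal. A quick sketch of the decimal/octal correspondence (`mode_string` is a hypothetical helper for illustration):

```python
# Kubernetes stores defaultMode as an int32; 420 decimal is 0644 octal (rw-r--r--).
default_mode = 420
assert default_mode == 0o644
assert oct(default_mode) == "0o644"

def mode_string(mode: int) -> str:
    """Render the low nine permission bits as an ls-style rwx string."""
    bits = "rwxrwxrwx"
    return "".join(b if mode & (1 << (8 - i)) else "-" for i, b in enumerate(bits))

assert mode_string(default_mode) == "rw-r--r--"
```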
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":294,"completed":61,"skipped":1051,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:49:02.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-2db3a261-5cf1-40b0-b9c7-36c441cf86f7 STEP: Creating a pod to test consume configMaps Jul 14 23:49:03.175: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9f5cd16b-c617-450d-8556-96b6b8eefebb" in namespace "projected-5441" to be "Succeeded or Failed" Jul 14 23:49:03.194: INFO: Pod "pod-projected-configmaps-9f5cd16b-c617-450d-8556-96b6b8eefebb": Phase="Pending", Reason="", readiness=false. Elapsed: 19.631411ms Jul 14 23:49:05.269: INFO: Pod "pod-projected-configmaps-9f5cd16b-c617-450d-8556-96b6b8eefebb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094370341s Jul 14 23:49:07.273: INFO: Pod "pod-projected-configmaps-9f5cd16b-c617-450d-8556-96b6b8eefebb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.098566701s STEP: Saw pod success Jul 14 23:49:07.273: INFO: Pod "pod-projected-configmaps-9f5cd16b-c617-450d-8556-96b6b8eefebb" satisfied condition "Succeeded or Failed" Jul 14 23:49:07.277: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-9f5cd16b-c617-450d-8556-96b6b8eefebb container projected-configmap-volume-test: STEP: delete the pod Jul 14 23:49:07.310: INFO: Waiting for pod pod-projected-configmaps-9f5cd16b-c617-450d-8556-96b6b8eefebb to disappear Jul 14 23:49:07.317: INFO: Pod pod-projected-configmaps-9f5cd16b-c617-450d-8556-96b6b8eefebb no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:49:07.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5441" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":62,"skipped":1070,"failed":0} SSS ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:49:07.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis Jul 14 23:49:07.882: FAIL: expected certificates API group/version, got 
[]v1.APIGroup{v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"v1", Version:"v1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"apiregistration.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"apiregistration.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"apiregistration.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"apiregistration.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"extensions", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"extensions/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"extensions/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"apps", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"apps/v1", Version:"v1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"apps/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"events.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"events.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"events.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"authentication.k8s.io", 
Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"authentication.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"authentication.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"authentication.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"authorization.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"authorization.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"authorization.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"authorization.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"autoscaling", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v2beta1", Version:"v2beta1"}, v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v2beta2", Version:"v2beta2"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"batch", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"batch/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"batch/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"batch/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"certificates.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"certificates.k8s.io/v1beta1", Version:"v1beta1"}}, 
PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"certificates.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"networking.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"networking.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"networking.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"networking.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"policy", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"policy/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"policy/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"rbac.authorization.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"rbac.authorization.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"rbac.authorization.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"rbac.authorization.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"storage.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"storage.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"storage.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"storage.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
Name:"admissionregistration.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"admissionregistration.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"admissionregistration.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"admissionregistration.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"apiextensions.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"apiextensions.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"apiextensions.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"apiextensions.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"scheduling.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"scheduling.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"scheduling.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"scheduling.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"coordination.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"coordination.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"coordination.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"coordination.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"node.k8s.io", 
Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"node.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"node.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"discovery.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"discovery.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"discovery.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}} Expected : false to equal : true Full Stack Trace k8s.io/kubernetes/test/e2e/auth.glob..func2.2() /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:231 +0x7c7 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000ef2240) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x337 k8s.io/kubernetes/test/e2e.TestE2E(0xc000ef2240) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc000ef2240, 0x4cc3740) /usr/local/go/src/testing/testing.go:991 +0xdc created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1042 +0x357 [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "certificates-3370". STEP: Found 0 events. 
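[Editorial note on the failure above: the assertion "Expected : false to equal : true" at certificates.go:231 fires because the e2e binary (v1.20.0-alpha) expects the `certificates.k8s.io` group to advertise `v1`, but the dumped discovery list from the v1.18.4 kube-apiserver contains only `certificates.k8s.io/v1beta1`. The check can be reproduced against such a discovery document with a minimal, self-contained sketch; the trimmed JSON below is a hypothetical stand-in for the real `GET /apis` response, and `hasGroupVersion` is an illustrative helper, not the e2e suite's own code.]

```go
package main

import (
	"encoding/json"
	"fmt"
)

// A trimmed /apis discovery response such as a v1.18 apiserver returns.
// In the failing run above, certificates.k8s.io advertises only v1beta1,
// while the v1.20-alpha e2e test also expects v1.
const discoveryJSON = `{
  "kind": "APIGroupList",
  "groups": [
    {"name": "certificates.k8s.io",
     "versions": [
       {"groupVersion": "certificates.k8s.io/v1beta1", "version": "v1beta1"}
     ],
     "preferredVersion":
       {"groupVersion": "certificates.k8s.io/v1beta1", "version": "v1beta1"}}
  ]
}`

type groupVersion struct {
	GroupVersion string `json:"groupVersion"`
	Version      string `json:"version"`
}

type apiGroup struct {
	Name     string         `json:"name"`
	Versions []groupVersion `json:"versions"`
}

type apiGroupList struct {
	Groups []apiGroup `json:"groups"`
}

// hasGroupVersion reports whether the discovery document advertises the
// given API group at the given version.
func hasGroupVersion(doc []byte, group, version string) (bool, error) {
	var list apiGroupList
	if err := json.Unmarshal(doc, &list); err != nil {
		return false, err
	}
	for _, g := range list.Groups {
		if g.Name != group {
			continue
		}
		for _, v := range g.Versions {
			if v.Version == version {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	for _, v := range []string{"v1", "v1beta1"} {
		ok, err := hasGroupVersion([]byte(discoveryJSON), "certificates.k8s.io", v)
		if err != nil {
			panic(err)
		}
		fmt.Printf("certificates.k8s.io/%s advertised: %v\n", v, ok)
	}
	// prints:
	// certificates.k8s.io/v1 advertised: false
	// certificates.k8s.io/v1beta1 advertised: true
}
```

Run against this sample, the check finds `v1beta1` but not `v1`, mirroring the skew between the newer test binary and the older apiserver in this log.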
Jul 14 23:49:07.890: INFO: POD NODE PHASE GRACE CONDITIONS Jul 14 23:49:07.890: INFO: Jul 14 23:49:07.894: INFO: Logging node info for node latest-control-plane Jul 14 23:49:07.896: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane /api/v1/nodes/latest-control-plane fab71f49-3955-4070-ba3f-a34ab7dbcb1f 1214513 0 2020-07-10 10:29:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2020-07-10 10:29:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2020-07-10 10:30:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2020-07-14 23:44:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: 
{{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-14 23:44:10 +0000 UTC,LastTransitionTime:2020-07-10 10:29:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-14 23:44:10 +0000 UTC,LastTransitionTime:2020-07-10 10:29:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-14 23:44:10 +0000 UTC,LastTransitionTime:2020-07-10 10:29:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-14 23:44:10 +0000 UTC,LastTransitionTime:2020-07-10 10:30:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.12,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:08e3d1af94e64c419f74d6afa70f0d43,SystemUUID:b2b9a347-3d8a-409e-9c43-3d2f455385e1,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.4.0-beta.1-34-g49b0743c,KubeletVersion:v1.18.4,KubeProxyVersion:v1.18.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.4],SizeBytes:146649905,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.4],SizeBytes:133416062,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.4],SizeBytes:132840771,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20200619-15f5b3ab],SizeBytes:120473968,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.4],SizeBytes:113093425,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jul 14 23:49:07.897: INFO: Logging kubelet events for node latest-control-plane Jul 14 23:49:07.899: INFO: Logging pods the kubelet thinks is on node latest-control-plane Jul 14 23:49:07.920: INFO: kube-proxy-bvnbl started at 2020-07-10 10:29:53 +0000 UTC (0+1 container statuses recorded) Jul 14 23:49:07.920: INFO: Container kube-proxy ready: true, restart count 0 Jul 14 23:49:07.920: INFO: coredns-66bff467f8-lkg9r started at 2020-07-10 10:30:12 +0000 UTC (0+1 container statuses recorded) Jul 14 23:49:07.920: INFO: Container coredns ready: true, restart count 0 Jul 14 23:49:07.920: INFO: etcd-latest-control-plane started at 2020-07-10 10:29:39 +0000 UTC (0+1 container statuses recorded) Jul 14 23:49:07.920: INFO: Container etcd ready: true, restart count 0 Jul 14 23:49:07.920: INFO: kube-scheduler-latest-control-plane started at 2020-07-10 10:29:39 +0000 UTC (0+1 container statuses recorded) Jul 14 23:49:07.920: INFO: 
Container kube-scheduler ready: true, restart count 1 Jul 14 23:49:07.920: INFO: kindnet-6gzv5 started at 2020-07-10 10:29:53 +0000 UTC (0+1 container statuses recorded) Jul 14 23:49:07.920: INFO: Container kindnet-cni ready: true, restart count 0 Jul 14 23:49:07.920: INFO: local-path-provisioner-67795f75bd-wdgcp started at 2020-07-10 10:30:09 +0000 UTC (0+1 container statuses recorded) Jul 14 23:49:07.920: INFO: Container local-path-provisioner ready: true, restart count 0 Jul 14 23:49:07.920: INFO: kube-apiserver-latest-control-plane started at 2020-07-10 10:29:39 +0000 UTC (0+1 container statuses recorded) Jul 14 23:49:07.920: INFO: Container kube-apiserver ready: true, restart count 0 Jul 14 23:49:07.920: INFO: kube-controller-manager-latest-control-plane started at 2020-07-10 10:29:39 +0000 UTC (0+1 container statuses recorded) Jul 14 23:49:07.920: INFO: Container kube-controller-manager ready: true, restart count 1 Jul 14 23:49:07.920: INFO: coredns-66bff467f8-xqch9 started at 2020-07-10 10:30:09 +0000 UTC (0+1 container statuses recorded) Jul 14 23:49:07.920: INFO: Container coredns ready: true, restart count 0 W0714 23:49:07.924415 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Jul 14 23:49:07.987: INFO: Latency metrics for node latest-control-plane Jul 14 23:49:07.987: INFO: Logging node info for node latest-worker Jul 14 23:49:07.990: INFO: Node Info: &Node{ObjectMeta:{latest-worker /api/v1/nodes/latest-worker ee905599-6d86-471c-8264-80d61eb4d02f 1215475 0 2020-07-10 10:30:12 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2020-07-10 10:30:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubeadm Update v1 2020-07-10 10:30:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {e2e.test Update v1 2020-07-14 00:28:07 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2020-07-14 23:46:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} 
{} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-14 23:46:25 +0000 UTC,LastTransitionTime:2020-07-10 10:30:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-14 23:46:25 +0000 UTC,LastTransitionTime:2020-07-10 10:30:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-14 23:46:25 +0000 UTC,LastTransitionTime:2020-07-10 10:30:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-14 23:46:25 +0000 UTC,LastTransitionTime:2020-07-10 10:30:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:469a70212bc546bfb73ddea4d8686893,SystemUUID:ff574bf8-eaa0-484e-9d22-817c6038d2e3,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.4.0-beta.1-34-g49b0743c,KubeletVersion:v1.18.4,KubeProxyVersion:v1.18.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.4],SizeBytes:146649905,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.4],SizeBytes:133416062,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.4],SizeBytes:132840771,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20200619-15f5b3ab],SizeBytes:120473968,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.4],SizeBytes:113093425,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12f377200949c25fde1e54bba639d34d119edd7cfcfb1d117526dba677c03c85 k8s.gcr.io/etcd:3.4.7],SizeBytes:104221097,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20],SizeBytes:46251412,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793 docker.io/library/busybox:latest],SizeBytes:767885,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jul 14 23:49:07.990: INFO: Logging kubelet events for node latest-worker Jul 14 23:49:08.012: INFO: Logging pods the kubelet thinks is on node latest-worker Jul 14 23:49:08.016: INFO: kindnet-qt4jk started at 2020-07-10 10:30:16 +0000 UTC (0+1 container statuses recorded) Jul 14 23:49:08.016: INFO: Container 
kindnet-cni ready: true, restart count 0 Jul 14 23:49:08.016: INFO: kube-proxy-xb9q4 started at 2020-07-10 10:30:16 +0000 UTC (0+1 container statuses recorded) Jul 14 23:49:08.016: INFO: Container kube-proxy ready: true, restart count 0 W0714 23:49:08.020429 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jul 14 23:49:08.051: INFO: Latency metrics for node latest-worker Jul 14 23:49:08.051: INFO: Logging node info for node latest-worker2 Jul 14 23:49:08.056: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 /api/v1/nodes/latest-worker2 0ed4e844-533c-4115-b90e-6070300ff379 1215468 0 2020-07-10 10:30:11 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2020-07-10 10:30:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2020-07-10 10:30:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2020-07-14 23:46:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 
110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-14 23:46:23 +0000 UTC,LastTransitionTime:2020-07-10 10:30:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-14 23:46:23 +0000 UTC,LastTransitionTime:2020-07-10 10:30:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-14 23:46:23 +0000 UTC,LastTransitionTime:2020-07-10 10:30:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-14 23:46:23 +0000 UTC,LastTransitionTime:2020-07-10 10:30:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.11,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:58abb20e7a0b4d058f79f995dc3b2d92,SystemUUID:a7355a65-57ac-4117-ae3f-f79ca388e0d4,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.4.0-beta.1-34-g49b0743c,KubeletVersion:v1.18.4,KubeProxyVersion:v1.18.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.4],SizeBytes:146649905,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.4],SizeBytes:133416062,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.4],SizeBytes:132840771,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20200619-15f5b3ab],SizeBytes:120473968,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.4],SizeBytes:113093425,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20],SizeBytes:46251412,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793 docker.io/library/busybox:latest],SizeBytes:767885,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jul 14 23:49:08.056: INFO: Logging kubelet events for node latest-worker2 Jul 14 23:49:08.059: INFO: Logging pods the kubelet thinks is on node latest-worker2 Jul 14 23:49:08.062: INFO: kube-proxy-s596l started at 2020-07-10 10:30:17 +0000 UTC (0+1 container statuses recorded) Jul 14 23:49:08.062: INFO: Container kube-proxy ready: true, restart count 0 Jul 14 23:49:08.062: INFO: kindnet-gkkxx started at 2020-07-10 10:30:17 +0000 UTC (0+1 container statuses recorded) Jul 14 23:49:08.062: INFO: Container 
kindnet-cni ready: true, restart count 0 W0714 23:49:08.091369 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jul 14 23:49:08.123: INFO: Latency metrics for node latest-worker2 Jul 14 23:49:08.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-3370" for this suite. • Failure [0.821 seconds] [sig-auth] Certificates API [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should support CSR API operations [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 14 23:49:07.883: expected certificates API group/version, got []v1.APIGroup{v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"v1", Version:"v1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"apiregistration.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"apiregistration.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"apiregistration.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"apiregistration.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"extensions", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"extensions/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"extensions/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", 
APIVersion:""}, Name:"apps", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"apps/v1", Version:"v1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"apps/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"events.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"events.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"events.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"authentication.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"authentication.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"authentication.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"authentication.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"authorization.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"authorization.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"authorization.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"authorization.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"autoscaling", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v2beta1", Version:"v2beta1"}, v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v2beta2", Version:"v2beta2"}}, 
PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"batch", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"batch/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"batch/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"batch/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"certificates.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"certificates.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"certificates.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"networking.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"networking.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"networking.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"networking.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"policy", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"policy/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"policy/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"rbac.authorization.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"rbac.authorization.k8s.io/v1", Version:"v1"}, 
v1.GroupVersionForDiscovery{GroupVersion:"rbac.authorization.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"rbac.authorization.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"storage.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"storage.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"storage.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"storage.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"admissionregistration.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"admissionregistration.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"admissionregistration.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"admissionregistration.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"apiextensions.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"apiextensions.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"apiextensions.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"apiextensions.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"scheduling.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"scheduling.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"scheduling.k8s.io/v1beta1", Version:"v1beta1"}}, 
PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"scheduling.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"coordination.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"coordination.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"coordination.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"coordination.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"node.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"node.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"node.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"discovery.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"discovery.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"discovery.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}} Expected : false to equal : true /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:231 ------------------------------ {"msg":"FAILED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":294,"completed":62,"skipped":1073,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 14 23:49:08.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating replication controller my-hostname-basic-64fb9a1f-ab91-4516-a004-5b3a52e4f803
Jul 14 23:49:08.251: INFO: Pod name my-hostname-basic-64fb9a1f-ab91-4516-a004-5b3a52e4f803: Found 0 pods out of 1
Jul 14 23:49:13.257: INFO: Pod name my-hostname-basic-64fb9a1f-ab91-4516-a004-5b3a52e4f803: Found 1 pods out of 1
Jul 14 23:49:13.257: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-64fb9a1f-ab91-4516-a004-5b3a52e4f803" are running
Jul 14 23:49:13.260: INFO: Pod "my-hostname-basic-64fb9a1f-ab91-4516-a004-5b3a52e4f803-bg89w" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-14 23:49:08 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-14 23:49:10 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-14 23:49:10 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-14 23:49:08 +0000 UTC Reason: Message:}])
Jul 14 23:49:13.260: INFO: Trying to dial the pod
Jul 14 23:49:18.273: INFO: Controller my-hostname-basic-64fb9a1f-ab91-4516-a004-5b3a52e4f803: Got expected result from replica 1 [my-hostname-basic-64fb9a1f-ab91-4516-a004-5b3a52e4f803-bg89w]: "my-hostname-basic-64fb9a1f-ab91-4516-a004-5b3a52e4f803-bg89w", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 14 23:49:18.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5235" for this suite.
• [SLOW TEST:10.136 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":294,"completed":63,"skipped":1078,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 14 23:49:18.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
Jul 14 23:49:18.392: INFO: Waiting up to 1m0s for all nodes to be ready
Jul 14 23:50:18.416: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 14 23:50:18.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
Jul 14 23:50:22.545: INFO: found a healthy node: latest-worker
[It] runs ReplicaSets to verify preemption running path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jul 14 23:50:36.824: INFO: pods created so far: [1 1 1]
Jul 14 23:50:36.824: INFO: length of pods created so far: 3
Jul 14 23:50:44.832: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 14 23:50:51.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-7588" for this suite.
[AfterEach] PreemptionExecutionPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 14 23:50:51.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-9254" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
• [SLOW TEST:93.726 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450
    runs ReplicaSets to verify preemption running path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":294,"completed":64,"skipped":1095,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]}
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 14 23:50:52.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul 14 23:50:52.141: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 14 23:50:52.145: INFO: Number of nodes with available pods: 0
Jul 14 23:50:52.145: INFO: Node latest-worker is running more than one daemon pod
Jul 14 23:50:53.260: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 14 23:50:53.264: INFO: Number of nodes with available pods: 0
Jul 14 23:50:53.264: INFO: Node latest-worker is running more than one daemon pod
Jul 14 23:50:54.150: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 14 23:50:54.154: INFO: Number of nodes with available pods: 0
Jul 14 23:50:54.154: INFO: Node latest-worker is running more than one daemon pod
Jul 14 23:50:55.164: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 14 23:50:55.252: INFO: Number of nodes with available pods: 0
Jul 14 23:50:55.252: INFO: Node latest-worker is running more than one daemon pod
Jul 14 23:50:56.151: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 14 23:50:56.154: INFO: Number of nodes with available pods: 1
Jul 14 23:50:56.154: INFO: Node latest-worker is running more than one daemon pod
Jul 14 23:50:57.405: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 14 23:50:57.465: INFO: Number of nodes with available pods: 2
Jul 14 23:50:57.465: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jul 14 23:50:57.764: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 14 23:50:57.967: INFO: Number of nodes with available pods: 1
Jul 14 23:50:57.967: INFO: Node latest-worker2 is running more than one daemon pod
Jul 14 23:50:58.971: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 14 23:50:58.985: INFO: Number of nodes with available pods: 1
Jul 14 23:50:58.985: INFO: Node latest-worker2 is running more than one daemon pod
Jul 14 23:50:59.971: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 14 23:50:59.975: INFO: Number of nodes with available pods: 1
Jul 14 23:50:59.975: INFO: Node latest-worker2 is running more than one daemon pod
Jul 14 23:51:00.972: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 14 23:51:00.976: INFO: Number of nodes with available pods: 1
Jul 14 23:51:00.976: INFO: Node latest-worker2 is running more than one daemon pod
Jul 14 23:51:01.972: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 14 23:51:01.976: INFO: Number of nodes with available pods: 2
Jul 14 23:51:01.976: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8579, will wait for the garbage collector to delete the pods
Jul 14 23:51:02.044: INFO: Deleting DaemonSet.extensions daemon-set took: 8.452638ms
Jul 14 23:51:02.344: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.287371ms
Jul 14 23:51:09.247: INFO: Number of nodes with available pods: 0
Jul 14 23:51:09.247: INFO: Number of running nodes: 0, number of available pods: 0
Jul 14 23:51:09.250: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8579/daemonsets","resourceVersion":"1217290"},"items":null}
Jul 14 23:51:09.253: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8579/pods","resourceVersion":"1217290"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 14 23:51:09.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8579" for this suite.
• [SLOW TEST:17.262 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":294,"completed":65,"skipped":1095,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 14 23:51:09.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 14 23:51:20.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4805" for this suite.
• [SLOW TEST:11.136 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set.
[Conformance]","total":294,"completed":66,"skipped":1110,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 14 23:51:20.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 14 23:51:21.108: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 14 23:51:23.117: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367481, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367481, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367481, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367481, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 14 23:51:26.162: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jul 14 23:51:26.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5368-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 14 23:51:27.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-785" for this suite.
STEP: Destroying namespace "webhook-785-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.963 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":294,"completed":67,"skipped":1111,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]}
S
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 14 23:51:27.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jul 14 23:51:32.495: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 14 23:51:32.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6187" for this suite.
• [SLOW TEST:5.316 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":294,"completed":68,"skipped":1112,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 14 23:51:32.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1031.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1031.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK >
/results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1031.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1031.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1031.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1031.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 14 23:51:40.905: INFO: DNS probes using dns-1031/dns-test-4d3aadef-b7ea-4caf-b316-c8ae6b026530 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 14 23:51:40.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1031" for this suite.
• [SLOW TEST:8.313 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":294,"completed":69,"skipped":1123,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:51:41.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Jul 14 23:51:41.371: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:51:48.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5983" for this suite. 
• [SLOW TEST:8.020 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":294,"completed":70,"skipped":1145,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:51:49.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 14 23:51:49.130: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-75b3c51b-be8a-4267-af26-7466b82602b8" in namespace "security-context-test-8340" to be "Succeeded or Failed" Jul 14 23:51:49.144: INFO: Pod "busybox-readonly-false-75b3c51b-be8a-4267-af26-7466b82602b8": 
Phase="Pending", Reason="", readiness=false. Elapsed: 13.341383ms Jul 14 23:51:51.148: INFO: Pod "busybox-readonly-false-75b3c51b-be8a-4267-af26-7466b82602b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017419563s Jul 14 23:51:53.311: INFO: Pod "busybox-readonly-false-75b3c51b-be8a-4267-af26-7466b82602b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.180983203s Jul 14 23:51:53.311: INFO: Pod "busybox-readonly-false-75b3c51b-be8a-4267-af26-7466b82602b8" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:51:53.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8340" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":294,"completed":71,"skipped":1148,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:51:53.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 14 23:51:53.623: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jul 14 23:51:56.535: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5501 create -f -' Jul 14 23:52:00.313: INFO: stderr: "" Jul 14 23:52:00.313: INFO: stdout: "e2e-test-crd-publish-openapi-977-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jul 14 23:52:00.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5501 delete e2e-test-crd-publish-openapi-977-crds test-cr' Jul 14 23:52:00.451: INFO: stderr: "" Jul 14 23:52:00.451: INFO: stdout: "e2e-test-crd-publish-openapi-977-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jul 14 23:52:00.451: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5501 apply -f -' Jul 14 23:52:00.768: INFO: stderr: "" Jul 14 23:52:00.768: INFO: stdout: "e2e-test-crd-publish-openapi-977-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jul 14 23:52:00.768: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5501 delete e2e-test-crd-publish-openapi-977-crds test-cr' Jul 14 23:52:00.897: INFO: stderr: "" Jul 14 23:52:00.897: INFO: stdout: "e2e-test-crd-publish-openapi-977-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jul 14 23:52:00.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config explain 
e2e-test-crd-publish-openapi-977-crds' Jul 14 23:52:01.168: INFO: stderr: "" Jul 14 23:52:01.168: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-977-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:52:03.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5501" for this suite. 
• [SLOW TEST:9.751 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":294,"completed":72,"skipped":1151,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:52:03.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:307 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Jul 14 23:52:03.265: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4691' Jul 14 23:52:03.592: INFO: stderr: "" 
Jul 14 23:52:03.592: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 14 23:52:03.593: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4691' Jul 14 23:52:03.745: INFO: stderr: "" Jul 14 23:52:03.745: INFO: stdout: "update-demo-nautilus-25stj update-demo-nautilus-n75ws " Jul 14 23:52:03.746: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-25stj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4691' Jul 14 23:52:03.865: INFO: stderr: "" Jul 14 23:52:03.865: INFO: stdout: "" Jul 14 23:52:03.865: INFO: update-demo-nautilus-25stj is created but not running Jul 14 23:52:08.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4691' Jul 14 23:52:08.989: INFO: stderr: "" Jul 14 23:52:08.989: INFO: stdout: "update-demo-nautilus-25stj update-demo-nautilus-n75ws " Jul 14 23:52:08.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-25stj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4691' Jul 14 23:52:09.087: INFO: stderr: "" Jul 14 23:52:09.087: INFO: stdout: "true" Jul 14 23:52:09.087: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-25stj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4691' Jul 14 23:52:09.181: INFO: stderr: "" Jul 14 23:52:09.181: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 14 23:52:09.181: INFO: validating pod update-demo-nautilus-25stj Jul 14 23:52:09.185: INFO: got data: { "image": "nautilus.jpg" } Jul 14 23:52:09.185: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 14 23:52:09.185: INFO: update-demo-nautilus-25stj is verified up and running Jul 14 23:52:09.185: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n75ws -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4691' Jul 14 23:52:09.284: INFO: stderr: "" Jul 14 23:52:09.284: INFO: stdout: "true" Jul 14 23:52:09.284: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n75ws -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4691' Jul 14 23:52:09.397: INFO: stderr: "" Jul 14 23:52:09.397: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 14 23:52:09.397: INFO: validating pod update-demo-nautilus-n75ws Jul 14 23:52:09.401: INFO: got data: { "image": "nautilus.jpg" } Jul 14 23:52:09.401: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 14 23:52:09.401: INFO: update-demo-nautilus-n75ws is verified up and running STEP: scaling down the replication controller Jul 14 23:52:09.403: INFO: scanned /root for discovery docs: Jul 14 23:52:09.403: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4691' Jul 14 23:52:10.530: INFO: stderr: "" Jul 14 23:52:10.530: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jul 14 23:52:10.530: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4691' Jul 14 23:52:10.631: INFO: stderr: "" Jul 14 23:52:10.631: INFO: stdout: "update-demo-nautilus-25stj update-demo-nautilus-n75ws " STEP: Replicas for name=update-demo: expected=1 actual=2 Jul 14 23:52:15.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4691' Jul 14 23:52:15.750: INFO: stderr: "" Jul 14 23:52:15.750: INFO: stdout: "update-demo-nautilus-25stj update-demo-nautilus-n75ws " STEP: Replicas for name=update-demo: expected=1 actual=2 Jul 14 23:52:20.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4691' Jul 14 23:52:20.844: INFO: stderr: "" Jul 14 23:52:20.844: INFO: stdout: "update-demo-nautilus-n75ws " Jul 14 23:52:20.845: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n75ws -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4691' Jul 14 23:52:20.938: INFO: stderr: "" Jul 14 23:52:20.938: INFO: stdout: "true" Jul 14 23:52:20.938: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n75ws -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4691' Jul 14 23:52:21.033: INFO: stderr: "" Jul 14 23:52:21.033: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 14 23:52:21.033: INFO: validating pod update-demo-nautilus-n75ws Jul 14 23:52:21.037: INFO: got data: { "image": "nautilus.jpg" } Jul 14 23:52:21.037: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 14 23:52:21.037: INFO: update-demo-nautilus-n75ws is verified up and running STEP: scaling up the replication controller Jul 14 23:52:21.040: INFO: scanned /root for discovery docs: Jul 14 23:52:21.040: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4691' Jul 14 23:52:22.193: INFO: stderr: "" Jul 14 23:52:22.193: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 14 23:52:22.194: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4691' Jul 14 23:52:22.288: INFO: stderr: "" Jul 14 23:52:22.289: INFO: stdout: "update-demo-nautilus-2rhg8 update-demo-nautilus-n75ws " Jul 14 23:52:22.289: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rhg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4691' Jul 14 23:52:22.400: INFO: stderr: "" Jul 14 23:52:22.400: INFO: stdout: "" Jul 14 23:52:22.401: INFO: update-demo-nautilus-2rhg8 is created but not running Jul 14 23:52:27.401: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4691' Jul 14 23:52:27.512: INFO: stderr: "" Jul 14 23:52:27.512: INFO: stdout: "update-demo-nautilus-2rhg8 update-demo-nautilus-n75ws " Jul 14 23:52:27.513: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rhg8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4691' Jul 14 23:52:27.617: INFO: stderr: "" Jul 14 23:52:27.617: INFO: stdout: "true" Jul 14 23:52:27.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2rhg8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4691' Jul 14 23:52:27.719: INFO: stderr: "" Jul 14 23:52:27.719: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 14 23:52:27.719: INFO: validating pod update-demo-nautilus-2rhg8 Jul 14 23:52:27.723: INFO: got data: { "image": "nautilus.jpg" } Jul 14 23:52:27.723: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jul 14 23:52:27.723: INFO: update-demo-nautilus-2rhg8 is verified up and running Jul 14 23:52:27.723: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n75ws -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4691' Jul 14 23:52:27.829: INFO: stderr: "" Jul 14 23:52:27.829: INFO: stdout: "true" Jul 14 23:52:27.829: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n75ws -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4691' Jul 14 23:52:27.950: INFO: stderr: "" Jul 14 23:52:27.950: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 14 23:52:27.950: INFO: validating pod update-demo-nautilus-n75ws Jul 14 23:52:27.954: INFO: got data: { "image": "nautilus.jpg" } Jul 14 23:52:27.954: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 14 23:52:27.954: INFO: update-demo-nautilus-n75ws is verified up and running STEP: using delete to clean up resources Jul 14 23:52:27.954: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4691' Jul 14 23:52:28.066: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jul 14 23:52:28.066: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jul 14 23:52:28.066: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4691' Jul 14 23:52:28.185: INFO: stderr: "No resources found in kubectl-4691 namespace.\n" Jul 14 23:52:28.185: INFO: stdout: "" Jul 14 23:52:28.185: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4691 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 14 23:52:28.287: INFO: stderr: "" Jul 14 23:52:28.287: INFO: stdout: "update-demo-nautilus-2rhg8\nupdate-demo-nautilus-n75ws\n" Jul 14 23:52:28.787: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4691' Jul 14 23:52:28.886: INFO: stderr: "No resources found in kubectl-4691 namespace.\n" Jul 14 23:52:28.886: INFO: stdout: "" Jul 14 23:52:28.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4691 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 14 23:52:28.985: INFO: stderr: "" Jul 14 23:52:28.985: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:52:28.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4691" for this suite. 
• [SLOW TEST:25.919 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:305 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":294,"completed":73,"skipped":1171,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:52:28.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 14 23:52:30.009: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 14 23:52:32.020: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367550, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367550, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367550, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367549, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 14 23:52:35.109: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 14 23:52:35.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3276-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:52:36.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-723" for this suite. STEP: Destroying namespace "webhook-723-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.369 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":294,"completed":74,"skipped":1171,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:52:36.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-a0b50e3d-5a09-44a7-8f4f-b1db31210c60 STEP: Creating a pod to test consume secrets Jul 14 23:52:36.446: INFO: Waiting up to 5m0s for pod "pod-secrets-f0779964-c320-4a70-89fc-f38170244405" in namespace "secrets-6154" to be "Succeeded or Failed" Jul 14 23:52:36.482: INFO: Pod 
"pod-secrets-f0779964-c320-4a70-89fc-f38170244405": Phase="Pending", Reason="", readiness=false. Elapsed: 35.933701ms Jul 14 23:52:38.596: INFO: Pod "pod-secrets-f0779964-c320-4a70-89fc-f38170244405": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150084258s Jul 14 23:52:40.601: INFO: Pod "pod-secrets-f0779964-c320-4a70-89fc-f38170244405": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.154682412s STEP: Saw pod success Jul 14 23:52:40.601: INFO: Pod "pod-secrets-f0779964-c320-4a70-89fc-f38170244405" satisfied condition "Succeeded or Failed" Jul 14 23:52:40.604: INFO: Trying to get logs from node latest-worker pod pod-secrets-f0779964-c320-4a70-89fc-f38170244405 container secret-env-test: STEP: delete the pod Jul 14 23:52:40.669: INFO: Waiting for pod pod-secrets-f0779964-c320-4a70-89fc-f38170244405 to disappear Jul 14 23:52:40.677: INFO: Pod pod-secrets-f0779964-c320-4a70-89fc-f38170244405 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:52:40.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6154" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":294,"completed":75,"skipped":1179,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:52:40.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:52:40.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7266" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":294,"completed":76,"skipped":1204,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:52:40.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Jul 14 23:52:40.927: INFO: Waiting up to 5m0s for pod "var-expansion-dbcb765e-7f11-4a42-aa73-2316199640ce" in namespace "var-expansion-6827" to be "Succeeded or Failed" Jul 14 23:52:40.949: INFO: Pod "var-expansion-dbcb765e-7f11-4a42-aa73-2316199640ce": Phase="Pending", Reason="", readiness=false. Elapsed: 21.216312ms Jul 14 23:52:42.953: INFO: Pod "var-expansion-dbcb765e-7f11-4a42-aa73-2316199640ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025723367s Jul 14 23:52:44.958: INFO: Pod "var-expansion-dbcb765e-7f11-4a42-aa73-2316199640ce": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030405672s STEP: Saw pod success Jul 14 23:52:44.958: INFO: Pod "var-expansion-dbcb765e-7f11-4a42-aa73-2316199640ce" satisfied condition "Succeeded or Failed" Jul 14 23:52:44.961: INFO: Trying to get logs from node latest-worker2 pod var-expansion-dbcb765e-7f11-4a42-aa73-2316199640ce container dapi-container: STEP: delete the pod Jul 14 23:52:44.991: INFO: Waiting for pod var-expansion-dbcb765e-7f11-4a42-aa73-2316199640ce to disappear Jul 14 23:52:45.009: INFO: Pod var-expansion-dbcb765e-7f11-4a42-aa73-2316199640ce no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:52:45.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6827" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":294,"completed":77,"skipped":1221,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:52:45.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-4213 Jul 14 23:52:49.129: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-4213 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jul 14 23:52:49.351: INFO: stderr: "I0714 23:52:49.272494 943 log.go:181] (0xc000976f20) (0xc000c59ae0) Create stream\nI0714 23:52:49.272557 943 log.go:181] (0xc000976f20) (0xc000c59ae0) Stream added, broadcasting: 1\nI0714 23:52:49.278503 943 log.go:181] (0xc000976f20) Reply frame received for 1\nI0714 23:52:49.278557 943 log.go:181] (0xc000976f20) (0xc0008112c0) Create stream\nI0714 23:52:49.278583 943 log.go:181] (0xc000976f20) (0xc0008112c0) Stream added, broadcasting: 3\nI0714 23:52:49.279658 943 log.go:181] (0xc000976f20) Reply frame received for 3\nI0714 23:52:49.279709 943 log.go:181] (0xc000976f20) (0xc000734b40) Create stream\nI0714 23:52:49.279733 943 log.go:181] (0xc000976f20) (0xc000734b40) Stream added, broadcasting: 5\nI0714 23:52:49.280855 943 log.go:181] (0xc000976f20) Reply frame received for 5\nI0714 23:52:49.339186 943 log.go:181] (0xc000976f20) Data frame received for 5\nI0714 23:52:49.339229 943 log.go:181] (0xc000734b40) (5) Data frame handling\nI0714 23:52:49.339252 943 log.go:181] (0xc000734b40) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0714 23:52:49.342807 943 log.go:181] (0xc000976f20) Data frame received for 3\nI0714 23:52:49.342855 943 log.go:181] (0xc0008112c0) (3) Data frame handling\nI0714 23:52:49.342889 943 log.go:181] (0xc0008112c0) (3) Data frame sent\nI0714 23:52:49.343407 943 log.go:181] (0xc000976f20) Data frame received for 5\nI0714 23:52:49.343439 943 log.go:181] (0xc000734b40) (5) Data frame handling\nI0714 23:52:49.343464 943 log.go:181] (0xc000976f20) Data frame received 
for 3\nI0714 23:52:49.343484 943 log.go:181] (0xc0008112c0) (3) Data frame handling\nI0714 23:52:49.345437 943 log.go:181] (0xc000976f20) Data frame received for 1\nI0714 23:52:49.345463 943 log.go:181] (0xc000c59ae0) (1) Data frame handling\nI0714 23:52:49.345486 943 log.go:181] (0xc000c59ae0) (1) Data frame sent\nI0714 23:52:49.345500 943 log.go:181] (0xc000976f20) (0xc000c59ae0) Stream removed, broadcasting: 1\nI0714 23:52:49.345664 943 log.go:181] (0xc000976f20) Go away received\nI0714 23:52:49.345895 943 log.go:181] (0xc000976f20) (0xc000c59ae0) Stream removed, broadcasting: 1\nI0714 23:52:49.345923 943 log.go:181] (0xc000976f20) (0xc0008112c0) Stream removed, broadcasting: 3\nI0714 23:52:49.345941 943 log.go:181] (0xc000976f20) (0xc000734b40) Stream removed, broadcasting: 5\n" Jul 14 23:52:49.351: INFO: stdout: "iptables" Jul 14 23:52:49.351: INFO: proxyMode: iptables Jul 14 23:52:49.360: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 14 23:52:49.405: INFO: Pod kube-proxy-mode-detector still exists Jul 14 23:52:51.405: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 14 23:52:51.412: INFO: Pod kube-proxy-mode-detector still exists Jul 14 23:52:53.405: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 14 23:52:53.409: INFO: Pod kube-proxy-mode-detector still exists Jul 14 23:52:55.405: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 14 23:52:55.409: INFO: Pod kube-proxy-mode-detector still exists Jul 14 23:52:57.405: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 14 23:52:57.409: INFO: Pod kube-proxy-mode-detector still exists Jul 14 23:52:59.405: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 14 23:52:59.409: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-4213 STEP: creating replication controller affinity-nodeport-timeout in namespace services-4213 I0714 23:52:59.482756 7 runners.go:190] 
Created replication controller with name: affinity-nodeport-timeout, namespace: services-4213, replica count: 3 I0714 23:53:02.533284 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0714 23:53:05.533520 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 14 23:53:05.546: INFO: Creating new exec pod Jul 14 23:53:10.618: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-4213 execpod-affinityf85z2 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Jul 14 23:53:10.881: INFO: stderr: "I0714 23:53:10.760574 961 log.go:181] (0xc000842fd0) (0xc000b8d360) Create stream\nI0714 23:53:10.760646 961 log.go:181] (0xc000842fd0) (0xc000b8d360) Stream added, broadcasting: 1\nI0714 23:53:10.766548 961 log.go:181] (0xc000842fd0) Reply frame received for 1\nI0714 23:53:10.766602 961 log.go:181] (0xc000842fd0) (0xc0003be0a0) Create stream\nI0714 23:53:10.766617 961 log.go:181] (0xc000842fd0) (0xc0003be0a0) Stream added, broadcasting: 3\nI0714 23:53:10.767328 961 log.go:181] (0xc000842fd0) Reply frame received for 3\nI0714 23:53:10.767350 961 log.go:181] (0xc000842fd0) (0xc0003bfea0) Create stream\nI0714 23:53:10.767357 961 log.go:181] (0xc000842fd0) (0xc0003bfea0) Stream added, broadcasting: 5\nI0714 23:53:10.767962 961 log.go:181] (0xc000842fd0) Reply frame received for 5\nI0714 23:53:10.855982 961 log.go:181] (0xc000842fd0) Data frame received for 5\nI0714 23:53:10.856027 961 log.go:181] (0xc0003bfea0) (5) Data frame handling\nI0714 23:53:10.856063 961 log.go:181] (0xc0003bfea0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0714 23:53:10.871994 961 log.go:181] (0xc000842fd0) Data frame received for 5\nI0714 23:53:10.872036 961 log.go:181] 
(0xc0003bfea0) (5) Data frame handling\nI0714 23:53:10.872052 961 log.go:181] (0xc0003bfea0) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0714 23:53:10.872686 961 log.go:181] (0xc000842fd0) Data frame received for 5\nI0714 23:53:10.872956 961 log.go:181] (0xc0003bfea0) (5) Data frame handling\nI0714 23:53:10.873004 961 log.go:181] (0xc000842fd0) Data frame received for 3\nI0714 23:53:10.873023 961 log.go:181] (0xc0003be0a0) (3) Data frame handling\nI0714 23:53:10.875344 961 log.go:181] (0xc000842fd0) Data frame received for 1\nI0714 23:53:10.875474 961 log.go:181] (0xc000b8d360) (1) Data frame handling\nI0714 23:53:10.875543 961 log.go:181] (0xc000b8d360) (1) Data frame sent\nI0714 23:53:10.875587 961 log.go:181] (0xc000842fd0) (0xc000b8d360) Stream removed, broadcasting: 1\nI0714 23:53:10.875620 961 log.go:181] (0xc000842fd0) Go away received\nI0714 23:53:10.876151 961 log.go:181] (0xc000842fd0) (0xc000b8d360) Stream removed, broadcasting: 1\nI0714 23:53:10.876175 961 log.go:181] (0xc000842fd0) (0xc0003be0a0) Stream removed, broadcasting: 3\nI0714 23:53:10.876184 961 log.go:181] (0xc000842fd0) (0xc0003bfea0) Stream removed, broadcasting: 5\n" Jul 14 23:53:10.881: INFO: stdout: "" Jul 14 23:53:10.882: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-4213 execpod-affinityf85z2 -- /bin/sh -x -c nc -zv -t -w 2 10.104.74.113 80' Jul 14 23:53:11.101: INFO: stderr: "I0714 23:53:11.036139 979 log.go:181] (0xc000a493f0) (0xc000b75ae0) Create stream\nI0714 23:53:11.036211 979 log.go:181] (0xc000a493f0) (0xc000b75ae0) Stream added, broadcasting: 1\nI0714 23:53:11.041197 979 log.go:181] (0xc000a493f0) Reply frame received for 1\nI0714 23:53:11.041233 979 log.go:181] (0xc000a493f0) (0xc0004d2280) Create stream\nI0714 23:53:11.041244 979 log.go:181] (0xc000a493f0) (0xc0004d2280) Stream added, broadcasting: 3\nI0714 23:53:11.042302 979 log.go:181] 
(0xc000a493f0) Reply frame received for 3\nI0714 23:53:11.042356 979 log.go:181] (0xc000a493f0) (0xc0002eca00) Create stream\nI0714 23:53:11.042391 979 log.go:181] (0xc000a493f0) (0xc0002eca00) Stream added, broadcasting: 5\nI0714 23:53:11.043277 979 log.go:181] (0xc000a493f0) Reply frame received for 5\nI0714 23:53:11.093700 979 log.go:181] (0xc000a493f0) Data frame received for 5\nI0714 23:53:11.093734 979 log.go:181] (0xc0002eca00) (5) Data frame handling\nI0714 23:53:11.093747 979 log.go:181] (0xc0002eca00) (5) Data frame sent\n+ nc -zv -t -w 2 10.104.74.113 80\nConnection to 10.104.74.113 80 port [tcp/http] succeeded!\nI0714 23:53:11.093785 979 log.go:181] (0xc000a493f0) Data frame received for 3\nI0714 23:53:11.093841 979 log.go:181] (0xc000a493f0) Data frame received for 5\nI0714 23:53:11.093870 979 log.go:181] (0xc0002eca00) (5) Data frame handling\nI0714 23:53:11.093890 979 log.go:181] (0xc0004d2280) (3) Data frame handling\nI0714 23:53:11.095132 979 log.go:181] (0xc000a493f0) Data frame received for 1\nI0714 23:53:11.095150 979 log.go:181] (0xc000b75ae0) (1) Data frame handling\nI0714 23:53:11.095169 979 log.go:181] (0xc000b75ae0) (1) Data frame sent\nI0714 23:53:11.095189 979 log.go:181] (0xc000a493f0) (0xc000b75ae0) Stream removed, broadcasting: 1\nI0714 23:53:11.095213 979 log.go:181] (0xc000a493f0) Go away received\nI0714 23:53:11.095704 979 log.go:181] (0xc000a493f0) (0xc000b75ae0) Stream removed, broadcasting: 1\nI0714 23:53:11.095729 979 log.go:181] (0xc000a493f0) (0xc0004d2280) Stream removed, broadcasting: 3\nI0714 23:53:11.095744 979 log.go:181] (0xc000a493f0) (0xc0002eca00) Stream removed, broadcasting: 5\n" Jul 14 23:53:11.101: INFO: stdout: "" Jul 14 23:53:11.101: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-4213 execpod-affinityf85z2 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 32480' Jul 14 23:53:11.324: INFO: stderr: "I0714 23:53:11.244229 997 log.go:181] 
(0xc000260000) (0xc000af80a0) Create stream\nI0714 23:53:11.244304 997 log.go:181] (0xc000260000) (0xc000af80a0) Stream added, broadcasting: 1\nI0714 23:53:11.246534 997 log.go:181] (0xc000260000) Reply frame received for 1\nI0714 23:53:11.246575 997 log.go:181] (0xc000260000) (0xc00099e820) Create stream\nI0714 23:53:11.246589 997 log.go:181] (0xc000260000) (0xc00099e820) Stream added, broadcasting: 3\nI0714 23:53:11.247508 997 log.go:181] (0xc000260000) Reply frame received for 3\nI0714 23:53:11.247531 997 log.go:181] (0xc000260000) (0xc000888820) Create stream\nI0714 23:53:11.247540 997 log.go:181] (0xc000260000) (0xc000888820) Stream added, broadcasting: 5\nI0714 23:53:11.248519 997 log.go:181] (0xc000260000) Reply frame received for 5\nI0714 23:53:11.315785 997 log.go:181] (0xc000260000) Data frame received for 3\nI0714 23:53:11.315834 997 log.go:181] (0xc00099e820) (3) Data frame handling\nI0714 23:53:11.315873 997 log.go:181] (0xc000260000) Data frame received for 5\nI0714 23:53:11.315899 997 log.go:181] (0xc000888820) (5) Data frame handling\nI0714 23:53:11.315913 997 log.go:181] (0xc000888820) (5) Data frame sent\nI0714 23:53:11.315925 997 log.go:181] (0xc000260000) Data frame received for 5\nI0714 23:53:11.315935 997 log.go:181] (0xc000888820) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 32480\nConnection to 172.18.0.14 32480 port [tcp/32480] succeeded!\nI0714 23:53:11.315961 997 log.go:181] (0xc000888820) (5) Data frame sent\nI0714 23:53:11.316259 997 log.go:181] (0xc000260000) Data frame received for 5\nI0714 23:53:11.316306 997 log.go:181] (0xc000888820) (5) Data frame handling\nI0714 23:53:11.318147 997 log.go:181] (0xc000260000) Data frame received for 1\nI0714 23:53:11.318180 997 log.go:181] (0xc000af80a0) (1) Data frame handling\nI0714 23:53:11.318197 997 log.go:181] (0xc000af80a0) (1) Data frame sent\nI0714 23:53:11.318218 997 log.go:181] (0xc000260000) (0xc000af80a0) Stream removed, broadcasting: 1\nI0714 23:53:11.318346 997 log.go:181] 
(0xc000260000) Go away received\nI0714 23:53:11.318850 997 log.go:181] (0xc000260000) (0xc000af80a0) Stream removed, broadcasting: 1\nI0714 23:53:11.318876 997 log.go:181] (0xc000260000) (0xc00099e820) Stream removed, broadcasting: 3\nI0714 23:53:11.318889 997 log.go:181] (0xc000260000) (0xc000888820) Stream removed, broadcasting: 5\n" Jul 14 23:53:11.324: INFO: stdout: "" Jul 14 23:53:11.324: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-4213 execpod-affinityf85z2 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 32480' Jul 14 23:53:11.526: INFO: stderr: "I0714 23:53:11.460049 1015 log.go:181] (0xc000d9d080) (0xc000b01540) Create stream\nI0714 23:53:11.460117 1015 log.go:181] (0xc000d9d080) (0xc000b01540) Stream added, broadcasting: 1\nI0714 23:53:11.468511 1015 log.go:181] (0xc000d9d080) Reply frame received for 1\nI0714 23:53:11.468552 1015 log.go:181] (0xc000d9d080) (0xc000a0a640) Create stream\nI0714 23:53:11.468563 1015 log.go:181] (0xc000d9d080) (0xc000a0a640) Stream added, broadcasting: 3\nI0714 23:53:11.469831 1015 log.go:181] (0xc000d9d080) Reply frame received for 3\nI0714 23:53:11.469898 1015 log.go:181] (0xc000d9d080) (0xc000a048c0) Create stream\nI0714 23:53:11.469922 1015 log.go:181] (0xc000d9d080) (0xc000a048c0) Stream added, broadcasting: 5\nI0714 23:53:11.470972 1015 log.go:181] (0xc000d9d080) Reply frame received for 5\nI0714 23:53:11.519657 1015 log.go:181] (0xc000d9d080) Data frame received for 5\nI0714 23:53:11.519683 1015 log.go:181] (0xc000a048c0) (5) Data frame handling\nI0714 23:53:11.519696 1015 log.go:181] (0xc000a048c0) (5) Data frame sent\nI0714 23:53:11.519706 1015 log.go:181] (0xc000d9d080) Data frame received for 5\nI0714 23:53:11.519715 1015 log.go:181] (0xc000a048c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 32480\nConnection to 172.18.0.11 32480 port [tcp/32480] succeeded!\nI0714 23:53:11.520186 1015 log.go:181] (0xc000d9d080) Data frame 
received for 3\nI0714 23:53:11.520220 1015 log.go:181] (0xc000a0a640) (3) Data frame handling\nI0714 23:53:11.521957 1015 log.go:181] (0xc000d9d080) Data frame received for 1\nI0714 23:53:11.522005 1015 log.go:181] (0xc000b01540) (1) Data frame handling\nI0714 23:53:11.522050 1015 log.go:181] (0xc000b01540) (1) Data frame sent\nI0714 23:53:11.522092 1015 log.go:181] (0xc000d9d080) (0xc000b01540) Stream removed, broadcasting: 1\nI0714 23:53:11.522354 1015 log.go:181] (0xc000d9d080) Go away received\nI0714 23:53:11.522415 1015 log.go:181] (0xc000d9d080) (0xc000b01540) Stream removed, broadcasting: 1\nI0714 23:53:11.522430 1015 log.go:181] (0xc000d9d080) (0xc000a0a640) Stream removed, broadcasting: 3\nI0714 23:53:11.522438 1015 log.go:181] (0xc000d9d080) (0xc000a048c0) Stream removed, broadcasting: 5\n" Jul 14 23:53:11.526: INFO: stdout: "" Jul 14 23:53:11.527: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-4213 execpod-affinityf85z2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:32480/ ; done' Jul 14 23:53:11.829: INFO: stderr: "I0714 23:53:11.666812 1033 log.go:181] (0xc000e093f0) (0xc000b36320) Create stream\nI0714 23:53:11.666854 1033 log.go:181] (0xc000e093f0) (0xc000b36320) Stream added, broadcasting: 1\nI0714 23:53:11.671721 1033 log.go:181] (0xc000e093f0) Reply frame received for 1\nI0714 23:53:11.671749 1033 log.go:181] (0xc000e093f0) (0xc000696c80) Create stream\nI0714 23:53:11.671758 1033 log.go:181] (0xc000e093f0) (0xc000696c80) Stream added, broadcasting: 3\nI0714 23:53:11.672874 1033 log.go:181] (0xc000e093f0) Reply frame received for 3\nI0714 23:53:11.672921 1033 log.go:181] (0xc000e093f0) (0xc0005920a0) Create stream\nI0714 23:53:11.672942 1033 log.go:181] (0xc000e093f0) (0xc0005920a0) Stream added, broadcasting: 5\nI0714 23:53:11.674016 1033 log.go:181] (0xc000e093f0) Reply frame received for 5\nI0714 23:53:11.723495 
1033 log.go:181] (0xc000e093f0) Data frame received for 5\nI0714 23:53:11.723529 1033 log.go:181] (0xc0005920a0) (5) Data frame handling\nI0714 23:53:11.723548 1033 log.go:181] (0xc0005920a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32480/\nI0714 23:53:11.723571 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.723582 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.723595 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.728211 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.728242 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.728265 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.729291 1033 log.go:181] (0xc000e093f0) Data frame received for 5\nI0714 23:53:11.729339 1033 log.go:181] (0xc0005920a0) (5) Data frame handling\nI0714 23:53:11.729363 1033 log.go:181] (0xc0005920a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32480/\nI0714 23:53:11.729397 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.729420 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.729446 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.737214 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.737238 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.737250 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.737720 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.737733 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.737739 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.737752 1033 log.go:181] (0xc000e093f0) Data frame received for 5\nI0714 23:53:11.737759 1033 log.go:181] (0xc0005920a0) (5) Data frame handling\nI0714 23:53:11.737772 1033 log.go:181] (0xc0005920a0) (5) Data frame 
sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32480/\nI0714 23:53:11.744456 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.744468 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.744474 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.745481 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.745492 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.745497 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.745524 1033 log.go:181] (0xc000e093f0) Data frame received for 5\nI0714 23:53:11.745549 1033 log.go:181] (0xc0005920a0) (5) Data frame handling\nI0714 23:53:11.745567 1033 log.go:181] (0xc0005920a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32480/\nI0714 23:53:11.749154 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.749175 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.749196 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.749429 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.749449 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.749456 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.749468 1033 log.go:181] (0xc000e093f0) Data frame received for 5\nI0714 23:53:11.749474 1033 log.go:181] (0xc0005920a0) (5) Data frame handling\nI0714 23:53:11.749482 1033 log.go:181] (0xc0005920a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32480/\nI0714 23:53:11.753134 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.753153 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.753167 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.753694 1033 log.go:181] (0xc000e093f0) Data frame received for 5\nI0714 23:53:11.753711 1033 log.go:181] (0xc0005920a0) (5) Data 
frame handling\nI0714 23:53:11.753729 1033 log.go:181] (0xc0005920a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32480/\nI0714 23:53:11.753770 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.753788 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.753810 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.757031 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.757053 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.757067 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.757410 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.757429 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.757443 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.757455 1033 log.go:181] (0xc000e093f0) Data frame received for 5\nI0714 23:53:11.757468 1033 log.go:181] (0xc0005920a0) (5) Data frame handling\nI0714 23:53:11.757480 1033 log.go:181] (0xc0005920a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32480/\nI0714 23:53:11.763565 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.763592 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.763622 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.764043 1033 log.go:181] (0xc000e093f0) Data frame received for 5\nI0714 23:53:11.764076 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.764108 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.764128 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.764151 1033 log.go:181] (0xc0005920a0) (5) Data frame handling\nI0714 23:53:11.764166 1033 log.go:181] (0xc0005920a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32480/\nI0714 23:53:11.768494 1033 log.go:181] (0xc000e093f0) Data 
frame received for 3\nI0714 23:53:11.768523 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.768550 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.769319 1033 log.go:181] (0xc000e093f0) Data frame received for 5\nI0714 23:53:11.769342 1033 log.go:181] (0xc0005920a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32480/\nI0714 23:53:11.769371 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.769398 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.769413 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.769426 1033 log.go:181] (0xc0005920a0) (5) Data frame sent\nI0714 23:53:11.775744 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.775779 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.775813 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.776230 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.776271 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.776289 1033 log.go:181] (0xc000e093f0) Data frame received for 5\nI0714 23:53:11.776314 1033 log.go:181] (0xc0005920a0) (5) Data frame handling\nI0714 23:53:11.776330 1033 log.go:181] (0xc0005920a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32480/\nI0714 23:53:11.776344 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.781276 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.781290 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.781306 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.782063 1033 log.go:181] (0xc000e093f0) Data frame received for 5\nI0714 23:53:11.782085 1033 log.go:181] (0xc0005920a0) (5) Data frame handling\nI0714 23:53:11.782099 1033 log.go:181] (0xc0005920a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.14:32480/\nI0714 23:53:11.782116 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.782132 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.782148 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.789681 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.789713 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.789736 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.790592 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.790620 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.790634 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.790655 1033 log.go:181] (0xc000e093f0) Data frame received for 5\nI0714 23:53:11.790665 1033 log.go:181] (0xc0005920a0) (5) Data frame handling\nI0714 23:53:11.790676 1033 log.go:181] (0xc0005920a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32480/\nI0714 23:53:11.796671 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.796695 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.796714 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.797772 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.797797 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.797822 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.797853 1033 log.go:181] (0xc000e093f0) Data frame received for 5\nI0714 23:53:11.797859 1033 log.go:181] (0xc0005920a0) (5) Data frame handling\nI0714 23:53:11.797868 1033 log.go:181] (0xc0005920a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32480/\nI0714 23:53:11.801777 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.801789 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.801794 1033 
log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.802526 1033 log.go:181] (0xc000e093f0) Data frame received for 5\nI0714 23:53:11.802552 1033 log.go:181] (0xc0005920a0) (5) Data frame handling\nI0714 23:53:11.802579 1033 log.go:181] (0xc0005920a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32480/\nI0714 23:53:11.802595 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.802605 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.802619 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.806541 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.806567 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.806582 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.807436 1033 log.go:181] (0xc000e093f0) Data frame received for 5\nI0714 23:53:11.807463 1033 log.go:181] (0xc0005920a0) (5) Data frame handling\nI0714 23:53:11.807475 1033 log.go:181] (0xc0005920a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32480/\nI0714 23:53:11.807491 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.807500 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.807510 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.812795 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.812829 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.812847 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.813679 1033 log.go:181] (0xc000e093f0) Data frame received for 5\nI0714 23:53:11.813698 1033 log.go:181] (0xc0005920a0) (5) Data frame handling\nI0714 23:53:11.813712 1033 log.go:181] (0xc0005920a0) (5) Data frame sent\nI0714 23:53:11.813724 1033 log.go:181] (0xc000e093f0) Data frame received for 5\nI0714 23:53:11.813739 1033 log.go:181] (0xc0005920a0) (5) Data frame handling\n+ echo\n+ 
curl -q -s --connect-timeout 2 http://172.18.0.14:32480/\nI0714 23:53:11.813757 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.813770 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.813779 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.813795 1033 log.go:181] (0xc0005920a0) (5) Data frame sent\nI0714 23:53:11.820215 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.820242 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.820265 1033 log.go:181] (0xc000696c80) (3) Data frame sent\nI0714 23:53:11.821366 1033 log.go:181] (0xc000e093f0) Data frame received for 3\nI0714 23:53:11.821381 1033 log.go:181] (0xc000696c80) (3) Data frame handling\nI0714 23:53:11.821407 1033 log.go:181] (0xc000e093f0) Data frame received for 5\nI0714 23:53:11.821438 1033 log.go:181] (0xc0005920a0) (5) Data frame handling\nI0714 23:53:11.823298 1033 log.go:181] (0xc000e093f0) Data frame received for 1\nI0714 23:53:11.823318 1033 log.go:181] (0xc000b36320) (1) Data frame handling\nI0714 23:53:11.823333 1033 log.go:181] (0xc000b36320) (1) Data frame sent\nI0714 23:53:11.823358 1033 log.go:181] (0xc000e093f0) (0xc000b36320) Stream removed, broadcasting: 1\nI0714 23:53:11.823446 1033 log.go:181] (0xc000e093f0) Go away received\nI0714 23:53:11.823847 1033 log.go:181] (0xc000e093f0) (0xc000b36320) Stream removed, broadcasting: 1\nI0714 23:53:11.823866 1033 log.go:181] (0xc000e093f0) (0xc000696c80) Stream removed, broadcasting: 3\nI0714 23:53:11.823875 1033 log.go:181] (0xc000e093f0) (0xc0005920a0) Stream removed, broadcasting: 5\n" Jul 14 23:53:11.829: INFO: stdout: 
"\naffinity-nodeport-timeout-76n8w\naffinity-nodeport-timeout-76n8w\naffinity-nodeport-timeout-76n8w\naffinity-nodeport-timeout-76n8w\naffinity-nodeport-timeout-76n8w\naffinity-nodeport-timeout-76n8w\naffinity-nodeport-timeout-76n8w\naffinity-nodeport-timeout-76n8w\naffinity-nodeport-timeout-76n8w\naffinity-nodeport-timeout-76n8w\naffinity-nodeport-timeout-76n8w\naffinity-nodeport-timeout-76n8w\naffinity-nodeport-timeout-76n8w\naffinity-nodeport-timeout-76n8w\naffinity-nodeport-timeout-76n8w\naffinity-nodeport-timeout-76n8w" Jul 14 23:53:11.829: INFO: Received response from host: affinity-nodeport-timeout-76n8w Jul 14 23:53:11.829: INFO: Received response from host: affinity-nodeport-timeout-76n8w Jul 14 23:53:11.829: INFO: Received response from host: affinity-nodeport-timeout-76n8w Jul 14 23:53:11.829: INFO: Received response from host: affinity-nodeport-timeout-76n8w Jul 14 23:53:11.829: INFO: Received response from host: affinity-nodeport-timeout-76n8w Jul 14 23:53:11.829: INFO: Received response from host: affinity-nodeport-timeout-76n8w Jul 14 23:53:11.829: INFO: Received response from host: affinity-nodeport-timeout-76n8w Jul 14 23:53:11.829: INFO: Received response from host: affinity-nodeport-timeout-76n8w Jul 14 23:53:11.829: INFO: Received response from host: affinity-nodeport-timeout-76n8w Jul 14 23:53:11.829: INFO: Received response from host: affinity-nodeport-timeout-76n8w Jul 14 23:53:11.829: INFO: Received response from host: affinity-nodeport-timeout-76n8w Jul 14 23:53:11.829: INFO: Received response from host: affinity-nodeport-timeout-76n8w Jul 14 23:53:11.829: INFO: Received response from host: affinity-nodeport-timeout-76n8w Jul 14 23:53:11.829: INFO: Received response from host: affinity-nodeport-timeout-76n8w Jul 14 23:53:11.829: INFO: Received response from host: affinity-nodeport-timeout-76n8w Jul 14 23:53:11.829: INFO: Received response from host: affinity-nodeport-timeout-76n8w Jul 14 23:53:11.830: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-4213 execpod-affinityf85z2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.14:32480/' Jul 14 23:53:12.038: INFO: stderr: "I0714 23:53:11.964706 1051 log.go:181] (0xc000946dc0) (0xc000b374a0) Create stream\nI0714 23:53:11.964913 1051 log.go:181] (0xc000946dc0) (0xc000b374a0) Stream added, broadcasting: 1\nI0714 23:53:11.969691 1051 log.go:181] (0xc000946dc0) Reply frame received for 1\nI0714 23:53:11.969735 1051 log.go:181] (0xc000946dc0) (0xc000b210e0) Create stream\nI0714 23:53:11.969750 1051 log.go:181] (0xc000946dc0) (0xc000b210e0) Stream added, broadcasting: 3\nI0714 23:53:11.970624 1051 log.go:181] (0xc000946dc0) Reply frame received for 3\nI0714 23:53:11.970652 1051 log.go:181] (0xc000946dc0) (0xc0009c8640) Create stream\nI0714 23:53:11.970660 1051 log.go:181] (0xc000946dc0) (0xc0009c8640) Stream added, broadcasting: 5\nI0714 23:53:11.971704 1051 log.go:181] (0xc000946dc0) Reply frame received for 5\nI0714 23:53:12.023675 1051 log.go:181] (0xc000946dc0) Data frame received for 5\nI0714 23:53:12.023705 1051 log.go:181] (0xc0009c8640) (5) Data frame handling\nI0714 23:53:12.023725 1051 log.go:181] (0xc0009c8640) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32480/\nI0714 23:53:12.029917 1051 log.go:181] (0xc000946dc0) Data frame received for 3\nI0714 23:53:12.029945 1051 log.go:181] (0xc000b210e0) (3) Data frame handling\nI0714 23:53:12.029974 1051 log.go:181] (0xc000b210e0) (3) Data frame sent\nI0714 23:53:12.031007 1051 log.go:181] (0xc000946dc0) Data frame received for 3\nI0714 23:53:12.031049 1051 log.go:181] (0xc000b210e0) (3) Data frame handling\nI0714 23:53:12.031091 1051 log.go:181] (0xc000946dc0) Data frame received for 5\nI0714 23:53:12.031184 1051 log.go:181] (0xc0009c8640) (5) Data frame handling\nI0714 23:53:12.032335 1051 log.go:181] (0xc000946dc0) Data frame received for 1\nI0714 23:53:12.032350 1051 log.go:181] 
(0xc000b374a0) (1) Data frame handling\nI0714 23:53:12.032358 1051 log.go:181] (0xc000b374a0) (1) Data frame sent\nI0714 23:53:12.032529 1051 log.go:181] (0xc000946dc0) (0xc000b374a0) Stream removed, broadcasting: 1\nI0714 23:53:12.032705 1051 log.go:181] (0xc000946dc0) Go away received\nI0714 23:53:12.033090 1051 log.go:181] (0xc000946dc0) (0xc000b374a0) Stream removed, broadcasting: 1\nI0714 23:53:12.033108 1051 log.go:181] (0xc000946dc0) (0xc000b210e0) Stream removed, broadcasting: 3\nI0714 23:53:12.033116 1051 log.go:181] (0xc000946dc0) (0xc0009c8640) Stream removed, broadcasting: 5\n" Jul 14 23:53:12.038: INFO: stdout: "affinity-nodeport-timeout-76n8w" Jul 14 23:53:27.039: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-4213 execpod-affinityf85z2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.14:32480/' Jul 14 23:53:27.266: INFO: stderr: "I0714 23:53:27.173805 1070 log.go:181] (0xc000c4afd0) (0xc000a0b4a0) Create stream\nI0714 23:53:27.173859 1070 log.go:181] (0xc000c4afd0) (0xc000a0b4a0) Stream added, broadcasting: 1\nI0714 23:53:27.178631 1070 log.go:181] (0xc000c4afd0) Reply frame received for 1\nI0714 23:53:27.178666 1070 log.go:181] (0xc000c4afd0) (0xc0009f7180) Create stream\nI0714 23:53:27.178677 1070 log.go:181] (0xc000c4afd0) (0xc0009f7180) Stream added, broadcasting: 3\nI0714 23:53:27.179495 1070 log.go:181] (0xc000c4afd0) Reply frame received for 3\nI0714 23:53:27.179553 1070 log.go:181] (0xc000c4afd0) (0xc000925e00) Create stream\nI0714 23:53:27.179568 1070 log.go:181] (0xc000c4afd0) (0xc000925e00) Stream added, broadcasting: 5\nI0714 23:53:27.180457 1070 log.go:181] (0xc000c4afd0) Reply frame received for 5\nI0714 23:53:27.251990 1070 log.go:181] (0xc000c4afd0) Data frame received for 5\nI0714 23:53:27.252024 1070 log.go:181] (0xc000925e00) (5) Data frame handling\nI0714 23:53:27.252043 1070 log.go:181] (0xc000925e00) (5) Data frame sent\n+ curl -q 
-s --connect-timeout 2 http://172.18.0.14:32480/\nI0714 23:53:27.258067 1070 log.go:181] (0xc000c4afd0) Data frame received for 3\nI0714 23:53:27.258101 1070 log.go:181] (0xc0009f7180) (3) Data frame handling\nI0714 23:53:27.258122 1070 log.go:181] (0xc0009f7180) (3) Data frame sent\nI0714 23:53:27.259123 1070 log.go:181] (0xc000c4afd0) Data frame received for 3\nI0714 23:53:27.259162 1070 log.go:181] (0xc0009f7180) (3) Data frame handling\nI0714 23:53:27.259210 1070 log.go:181] (0xc000c4afd0) Data frame received for 5\nI0714 23:53:27.259233 1070 log.go:181] (0xc000925e00) (5) Data frame handling\nI0714 23:53:27.260912 1070 log.go:181] (0xc000c4afd0) Data frame received for 1\nI0714 23:53:27.260941 1070 log.go:181] (0xc000a0b4a0) (1) Data frame handling\nI0714 23:53:27.260955 1070 log.go:181] (0xc000a0b4a0) (1) Data frame sent\nI0714 23:53:27.261166 1070 log.go:181] (0xc000c4afd0) (0xc000a0b4a0) Stream removed, broadcasting: 1\nI0714 23:53:27.261219 1070 log.go:181] (0xc000c4afd0) Go away received\nI0714 23:53:27.261675 1070 log.go:181] (0xc000c4afd0) (0xc000a0b4a0) Stream removed, broadcasting: 1\nI0714 23:53:27.261696 1070 log.go:181] (0xc000c4afd0) (0xc0009f7180) Stream removed, broadcasting: 3\nI0714 23:53:27.261708 1070 log.go:181] (0xc000c4afd0) (0xc000925e00) Stream removed, broadcasting: 5\n" Jul 14 23:53:27.266: INFO: stdout: "affinity-nodeport-timeout-nltjd" Jul 14 23:53:27.266: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-4213, will wait for the garbage collector to delete the pods Jul 14 23:53:27.374: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 28.623036ms Jul 14 23:53:27.775: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 400.128497ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:53:39.405: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "services-4213" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:54.398 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":294,"completed":78,"skipped":1242,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:53:39.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 14 23:53:47.585: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 14 23:53:47.611: INFO: Pod pod-with-poststart-http-hook still exists Jul 14 23:53:49.612: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 14 23:53:49.617: INFO: Pod pod-with-poststart-http-hook still exists Jul 14 23:53:51.612: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 14 23:53:51.616: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:53:51.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4349" for this suite. 
• [SLOW TEST:12.206 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":294,"completed":79,"skipped":1243,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:53:51.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 14 23:53:52.304: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 14 23:53:54.315: 
INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367632, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367632, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367632, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367632, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 14 23:53:56.319: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367632, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367632, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367632, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367632, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired 
with the endpoint Jul 14 23:53:59.356: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:53:59.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1141" for this suite. STEP: Destroying namespace "webhook-1141-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.893 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":294,"completed":80,"skipped":1253,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:53:59.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 14 23:55:59.680: INFO: Deleting pod "var-expansion-47a8ff16-2e3d-4208-b888-6a429f69dac2" in namespace "var-expansion-1507" Jul 14 23:55:59.685: INFO: Wait up to 5m0s for pod "var-expansion-47a8ff16-2e3d-4208-b888-6a429f69dac2" to be fully deleted 
[AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:56:03.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1507" for this suite. • [SLOW TEST:124.204 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":294,"completed":81,"skipped":1265,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:56:03.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 14 23:56:08.389: INFO: Successfully updated pod 
"pod-update-637b5790-0c90-46fe-ac24-a90a6c8b932c" STEP: verifying the updated pod is in kubernetes Jul 14 23:56:08.434: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:56:08.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-487" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":294,"completed":82,"skipped":1275,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:56:08.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-cd2f85a4-4cfc-43f0-9c2d-245de44a52e0 STEP: Creating secret with name s-test-opt-upd-68103142-1914-4e06-94cd-54cc32767486 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-cd2f85a4-4cfc-43f0-9c2d-245de44a52e0 STEP: Updating secret s-test-opt-upd-68103142-1914-4e06-94cd-54cc32767486 STEP: Creating secret with name s-test-opt-create-504124c4-7f4f-46b5-852b-7cf0efd41f3f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:56:18.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8508" for this suite. • [SLOW TEST:10.341 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":83,"skipped":1298,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Ingress API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:56:18.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 Jul 14 23:56:18.859: FAIL: expected ingresses, got []v1.APIResource{v1.APIResource{Name:"networkpolicies", SingularName:"", Namespaced:true, Group:"", Version:"", Kind:"NetworkPolicy", Verbs:v1.Verbs{"create", "delete", "deletecollection", "get", 
"list", "patch", "update", "watch"}, ShortNames:[]string{"netpol"}, Categories:[]string(nil), StorageVersionHash:"YpfwF18m1G8="}} Expected : false to equal : true Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func12.1() /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:1050 +0xc0a k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000ef2240) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x337 k8s.io/kubernetes/test/e2e.TestE2E(0xc000ef2240) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc000ef2240, 0x4cc3740) /usr/local/go/src/testing/testing.go:991 +0xdc created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1042 +0x357 [AfterEach] [sig-network] Ingress API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "ingress-3095". STEP: Found 0 events. Jul 14 23:56:18.865: INFO: POD NODE PHASE GRACE CONDITIONS Jul 14 23:56:18.865: INFO: Jul 14 23:56:18.868: INFO: Logging node info for node latest-control-plane Jul 14 23:56:18.870: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane /api/v1/nodes/latest-control-plane fab71f49-3955-4070-ba3f-a34ab7dbcb1f 1218589 0 2020-07-10 10:29:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2020-07-10 10:29:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2020-07-10 10:30:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2020-07-14 23:54:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:ni
l,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-14 23:54:10 +0000 UTC,LastTransitionTime:2020-07-10 10:29:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-14 23:54:10 +0000 UTC,LastTransitionTime:2020-07-10 10:29:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-14 23:54:10 +0000 UTC,LastTransitionTime:2020-07-10 10:29:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-14 23:54:10 +0000 UTC,LastTransitionTime:2020-07-10 10:30:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.12,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:08e3d1af94e64c419f74d6afa70f0d43,SystemUUID:b2b9a347-3d8a-409e-9c43-3d2f455385e1,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.4.0-beta.1-34-g49b0743c,KubeletVersion:v1.18.4,KubeProxyVersion:v1.18.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.4],SizeBytes:146649905,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.4],SizeBytes:133416062,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.4],SizeBytes:132840771,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20200619-15f5b3ab],SizeBytes:120473968,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.4],SizeBytes:113093425,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jul 14 23:56:18.870: INFO: Logging kubelet events for node latest-control-plane Jul 14 23:56:18.873: INFO: Logging pods the kubelet thinks is on node latest-control-plane Jul 14 23:56:18.892: INFO: kube-apiserver-latest-control-plane started at 2020-07-10 10:29:39 +0000 UTC (0+1 container statuses recorded) Jul 14 23:56:18.893: INFO: Container kube-apiserver ready: true, restart count 0 Jul 14 23:56:18.893: INFO: kube-controller-manager-latest-control-plane started at 2020-07-10 10:29:39 +0000 UTC (0+1 container statuses recorded) Jul 14 23:56:18.893: INFO: Container kube-controller-manager ready: true, restart count 1 Jul 14 23:56:18.893: INFO: coredns-66bff467f8-xqch9 started at 2020-07-10 10:30:09 +0000 UTC (0+1 container statuses recorded) Jul 14 23:56:18.893: INFO: Container coredns ready: true, restart count 0 Jul 14 23:56:18.893: INFO: local-path-provisioner-67795f75bd-wdgcp started at 2020-07-10 10:30:09 +0000 UTC 
(0+1 container statuses recorded) Jul 14 23:56:18.893: INFO: Container local-path-provisioner ready: true, restart count 0 Jul 14 23:56:18.893: INFO: etcd-latest-control-plane started at 2020-07-10 10:29:39 +0000 UTC (0+1 container statuses recorded) Jul 14 23:56:18.893: INFO: Container etcd ready: true, restart count 0 Jul 14 23:56:18.893: INFO: kube-scheduler-latest-control-plane started at 2020-07-10 10:29:39 +0000 UTC (0+1 container statuses recorded) Jul 14 23:56:18.893: INFO: Container kube-scheduler ready: true, restart count 1 Jul 14 23:56:18.893: INFO: kindnet-6gzv5 started at 2020-07-10 10:29:53 +0000 UTC (0+1 container statuses recorded) Jul 14 23:56:18.893: INFO: Container kindnet-cni ready: true, restart count 0 Jul 14 23:56:18.893: INFO: kube-proxy-bvnbl started at 2020-07-10 10:29:53 +0000 UTC (0+1 container statuses recorded) Jul 14 23:56:18.893: INFO: Container kube-proxy ready: true, restart count 0 Jul 14 23:56:18.893: INFO: coredns-66bff467f8-lkg9r started at 2020-07-10 10:30:12 +0000 UTC (0+1 container statuses recorded) Jul 14 23:56:18.893: INFO: Container coredns ready: true, restart count 0 W0714 23:56:18.898448 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Jul 14 23:56:18.978: INFO: Latency metrics for node latest-control-plane Jul 14 23:56:18.978: INFO: Logging node info for node latest-worker Jul 14 23:56:18.982: INFO: Node Info: &Node{ObjectMeta:{latest-worker /api/v1/nodes/latest-worker ee905599-6d86-471c-8264-80d61eb4d02f 1218905 0 2020-07-10 10:30:12 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2020-07-10 10:30:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubeadm Update v1 2020-07-10 10:30:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kubelet Update v1 2020-07-14 23:55:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 
110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-14 23:55:55 +0000 UTC,LastTransitionTime:2020-07-10 10:30:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-14 23:55:55 +0000 UTC,LastTransitionTime:2020-07-10 10:30:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-14 23:55:55 +0000 UTC,LastTransitionTime:2020-07-10 10:30:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-14 23:55:55 +0000 UTC,LastTransitionTime:2020-07-10 10:30:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:469a70212bc546bfb73ddea4d8686893,SystemUUID:ff574bf8-eaa0-484e-9d22-817c6038d2e3,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.4.0-beta.1-34-g49b0743c,KubeletVersion:v1.18.4,KubeProxyVersion:v1.18.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.4],SizeBytes:146649905,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.4],SizeBytes:133416062,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.4],SizeBytes:132840771,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20200619-15f5b3ab],SizeBytes:120473968,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.4],SizeBytes:113093425,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12f377200949c25fde1e54bba639d34d119edd7cfcfb1d117526dba677c03c85 k8s.gcr.io/etcd:3.4.7],SizeBytes:104221097,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20],SizeBytes:46251412,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793 docker.io/library/busybox:latest],SizeBytes:767885,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jul 14 23:56:18.982: INFO: Logging kubelet events for node latest-worker Jul 14 23:56:18.985: INFO: Logging pods the kubelet thinks is on node latest-worker Jul 14 23:56:19.003: INFO: kube-proxy-xb9q4 started at 2020-07-10 10:30:16 +0000 UTC (0+1 container statuses recorded) Jul 14 23:56:19.003: INFO: Container 
kube-proxy ready: true, restart count 0 Jul 14 23:56:19.003: INFO: kindnet-qt4jk started at 2020-07-10 10:30:16 +0000 UTC (0+1 container statuses recorded) Jul 14 23:56:19.003: INFO: Container kindnet-cni ready: true, restart count 0 Jul 14 23:56:19.003: INFO: pod-secrets-5c1a5b7a-8e0c-4425-80c1-dc175e921f08 started at 2020-07-14 23:56:08 +0000 UTC (0+3 container statuses recorded) Jul 14 23:56:19.003: INFO: Container creates-volume-test ready: true, restart count 0 Jul 14 23:56:19.004: INFO: Container dels-volume-test ready: true, restart count 0 Jul 14 23:56:19.004: INFO: Container upds-volume-test ready: true, restart count 0 W0714 23:56:19.010006 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jul 14 23:56:19.070: INFO: Latency metrics for node latest-worker Jul 14 23:56:19.070: INFO: Logging node info for node latest-worker2 Jul 14 23:56:19.074: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 /api/v1/nodes/latest-worker2 0ed4e844-533c-4115-b90e-6070300ff379 1217405 0 2020-07-10 10:30:11 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2020-07-10 10:30:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2020-07-10 10:30:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2020-07-14 23:51:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 
110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-14 23:51:24 +0000 UTC,LastTransitionTime:2020-07-10 10:30:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-14 23:51:24 +0000 UTC,LastTransitionTime:2020-07-10 10:30:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-14 23:51:24 +0000 UTC,LastTransitionTime:2020-07-10 10:30:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-14 23:51:24 +0000 UTC,LastTransitionTime:2020-07-10 10:30:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.11,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:58abb20e7a0b4d058f79f995dc3b2d92,SystemUUID:a7355a65-57ac-4117-ae3f-f79ca388e0d4,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.4.0-beta.1-34-g49b0743c,KubeletVersion:v1.18.4,KubeProxyVersion:v1.18.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.4],SizeBytes:146649905,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.4],SizeBytes:133416062,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.4],SizeBytes:132840771,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20200619-15f5b3ab],SizeBytes:120473968,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.4],SizeBytes:113093425,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20],SizeBytes:46251412,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793 docker.io/library/busybox:latest],SizeBytes:767885,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jul 14 23:56:19.074: INFO: Logging kubelet events for node latest-worker2 Jul 14 23:56:19.078: INFO: Logging pods the kubelet thinks is on node latest-worker2 Jul 14 23:56:19.097: INFO: kube-proxy-s596l started at 2020-07-10 10:30:17 +0000 UTC (0+1 container statuses recorded) Jul 14 23:56:19.097: INFO: Container kube-proxy ready: true, restart count 0 Jul 14 23:56:19.097: INFO: pod-update-637b5790-0c90-46fe-ac24-a90a6c8b932c started at 2020-07-14 23:56:03 +0000 UTC (0+1 container statuses recorded) Jul 14 
23:56:19.097: INFO: Container nginx ready: false, restart count 0
Jul 14 23:56:19.097: INFO: kindnet-gkkxx started at 2020-07-10 10:30:17 +0000 UTC (0+1 container statuses recorded)
Jul 14 23:56:19.097: INFO: Container kindnet-cni ready: true, restart count 0
W0714 23:56:19.110785 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jul 14 23:56:19.151: INFO: Latency metrics for node latest-worker2
Jul 14 23:56:19.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-3095" for this suite.
• Failure [0.405 seconds]
[sig-network] Ingress API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support creating Ingress API operations [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  Jul 14 23:56:18.859: expected ingresses, got []v1.APIResource{v1.APIResource{Name:"networkpolicies", SingularName:"", Namespaced:true, Group:"", Version:"", Kind:"NetworkPolicy", Verbs:v1.Verbs{"create", "delete", "deletecollection", "get", "list", "patch", "update", "watch"}, ShortNames:[]string{"netpol"}, Categories:[]string(nil), StorageVersionHash:"YpfwF18m1G8="}}
  Expected
      : false
  to equal
      : true

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:1050
------------------------------
{"msg":"FAILED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":294,"completed":83,"skipped":1322,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:56:19.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 14 23:56:19.900: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 14 23:56:21.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367779, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367779, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367779, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367779, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 14 23:56:23.917: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367779, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367779, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367779, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367779, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 14 23:56:26.941: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:56:27.032: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready
STEP: Destroying namespace "webhook-9182" for this suite.
STEP: Destroying namespace "webhook-9182-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.975 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":294,"completed":84,"skipped":1339,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]}
SSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 14 23:56:27.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name
configmap-test-volume-map-eabc9782-17ae-49d0-84f5-e3e43b4a2875 STEP: Creating a pod to test consume configMaps Jul 14 23:56:27.290: INFO: Waiting up to 5m0s for pod "pod-configmaps-acf0a1c3-780f-42af-a001-817a15c16982" in namespace "configmap-1992" to be "Succeeded or Failed" Jul 14 23:56:27.306: INFO: Pod "pod-configmaps-acf0a1c3-780f-42af-a001-817a15c16982": Phase="Pending", Reason="", readiness=false. Elapsed: 15.952549ms Jul 14 23:56:29.311: INFO: Pod "pod-configmaps-acf0a1c3-780f-42af-a001-817a15c16982": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020378752s Jul 14 23:56:31.315: INFO: Pod "pod-configmaps-acf0a1c3-780f-42af-a001-817a15c16982": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024618124s STEP: Saw pod success Jul 14 23:56:31.315: INFO: Pod "pod-configmaps-acf0a1c3-780f-42af-a001-817a15c16982" satisfied condition "Succeeded or Failed" Jul 14 23:56:31.320: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-acf0a1c3-780f-42af-a001-817a15c16982 container configmap-volume-test: STEP: delete the pod Jul 14 23:56:31.337: INFO: Waiting for pod pod-configmaps-acf0a1c3-780f-42af-a001-817a15c16982 to disappear Jul 14 23:56:31.373: INFO: Pod pod-configmaps-acf0a1c3-780f-42af-a001-817a15c16982 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:56:31.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1992" for this suite. 
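[Editor's aside: the ConfigMap-volume flow logged above (create a ConfigMap, mount it into a pod with an items mapping, run as non-root, wait for "Succeeded or Failed") can be sketched with a manifest along these lines. This is an illustrative sketch, not the object the e2e framework actually generates; the names, the key/path values, and the runAsUser UID are assumptions.]

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map    # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example       # illustrative name
spec:
  securityContext:
    runAsUser: 1000                  # non-root UID, as the test name implies (assumed value)
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/configmap-volume/mapped-key"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:                         # map key "data-1" to file "mapped-key"
      - key: data-1
        path: mapped-key
```

Such a pod runs the container once and exits, which mirrors the Pending-to-Succeeded phase polling visible in the log above.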
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":294,"completed":85,"skipped":1342,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 14 23:56:31.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test override arguments
Jul 14 23:56:31.478: INFO: Waiting up to 5m0s for pod "client-containers-784bc9a8-02a7-4f56-a087-d853f5c379f6" in namespace "containers-7799" to be "Succeeded or Failed"
Jul 14 23:56:31.482: INFO: Pod "client-containers-784bc9a8-02a7-4f56-a087-d853f5c379f6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.618847ms
Jul 14 23:56:33.486: INFO: Pod "client-containers-784bc9a8-02a7-4f56-a087-d853f5c379f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00778355s
Jul 14 23:56:35.491: INFO: Pod "client-containers-784bc9a8-02a7-4f56-a087-d853f5c379f6": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.012702293s
STEP: Saw pod success
Jul 14 23:56:35.491: INFO: Pod "client-containers-784bc9a8-02a7-4f56-a087-d853f5c379f6" satisfied condition "Succeeded or Failed"
Jul 14 23:56:35.494: INFO: Trying to get logs from node latest-worker2 pod client-containers-784bc9a8-02a7-4f56-a087-d853f5c379f6 container test-container:
STEP: delete the pod
Jul 14 23:56:35.513: INFO: Waiting for pod client-containers-784bc9a8-02a7-4f56-a087-d853f5c379f6 to disappear
Jul 14 23:56:35.557: INFO: Pod client-containers-784bc9a8-02a7-4f56-a087-d853f5c379f6 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 14 23:56:35.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7799" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":294,"completed":86,"skipped":1359,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 14 23:56:35.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 14 23:56:36.164: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 14 23:56:38.241: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367796, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367796, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367796, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730367796, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 14 23:56:41.281: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jul 14 23:56:41.304: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:56:41.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-124" for this suite. STEP: Destroying namespace "webhook-124-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.865 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":294,"completed":87,"skipped":1363,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:56:41.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:56:45.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5266" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":88,"skipped":1398,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:56:45.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-c8f9c200-d629-4f32-8820-9a173898f050 STEP: Creating a pod to test consume configMaps Jul 14 23:56:45.681: 
INFO: Waiting up to 5m0s for pod "pod-configmaps-bd2c5464-bed2-4723-87d1-6b8398bc108b" in namespace "configmap-7480" to be "Succeeded or Failed" Jul 14 23:56:45.684: INFO: Pod "pod-configmaps-bd2c5464-bed2-4723-87d1-6b8398bc108b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.417182ms Jul 14 23:56:47.688: INFO: Pod "pod-configmaps-bd2c5464-bed2-4723-87d1-6b8398bc108b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007510817s Jul 14 23:56:49.693: INFO: Pod "pod-configmaps-bd2c5464-bed2-4723-87d1-6b8398bc108b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012248065s STEP: Saw pod success Jul 14 23:56:49.693: INFO: Pod "pod-configmaps-bd2c5464-bed2-4723-87d1-6b8398bc108b" satisfied condition "Succeeded or Failed" Jul 14 23:56:49.696: INFO: Trying to get logs from node latest-worker pod pod-configmaps-bd2c5464-bed2-4723-87d1-6b8398bc108b container configmap-volume-test: STEP: delete the pod Jul 14 23:56:49.835: INFO: Waiting for pod pod-configmaps-bd2c5464-bed2-4723-87d1-6b8398bc108b to disappear Jul 14 23:56:49.847: INFO: Pod pod-configmaps-bd2c5464-bed2-4723-87d1-6b8398bc108b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:56:49.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7480" for this suite. 
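The ConfigMap volume test above mounts a ConfigMap with `defaultMode` set and asserts the pod reaches "Succeeded". A minimal manifest sketch of that pattern — all names and the probe command here are hypothetical illustrations, not the generated values from the log:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config          # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod             # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: configmap-volume-test
      image: busybox            # any image that can stat a file
      command: ["sh", "-c", "stat -c '%a' /etc/config/data-1"]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: example-config
        defaultMode: 0400       # file mode applied to every projected key (default 0644)
```

The e2e test then reads the container's logs to confirm the projected file carries the requested mode.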
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":89,"skipped":1434,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:56:49.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-db6d235f-28de-4ea7-a3df-a55d72da8e28 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:56:49.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6724" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":294,"completed":90,"skipped":1437,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:56:49.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jul 14 23:56:49.979: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 14 23:56:49.996: INFO: Waiting for terminating namespaces to be deleted... 
Jul 14 23:56:49.999: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jul 14 23:56:50.004: INFO: kindnet-qt4jk from kube-system started at 2020-07-10 10:30:16 +0000 UTC (1 container statuses recorded) Jul 14 23:56:50.004: INFO: Container kindnet-cni ready: true, restart count 0 Jul 14 23:56:50.004: INFO: kube-proxy-xb9q4 from kube-system started at 2020-07-10 10:30:16 +0000 UTC (1 container statuses recorded) Jul 14 23:56:50.004: INFO: Container kube-proxy ready: true, restart count 0 Jul 14 23:56:50.004: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jul 14 23:56:50.009: INFO: kindnet-gkkxx from kube-system started at 2020-07-10 10:30:17 +0000 UTC (1 container statuses recorded) Jul 14 23:56:50.009: INFO: Container kindnet-cni ready: true, restart count 0 Jul 14 23:56:50.009: INFO: kube-proxy-s596l from kube-system started at 2020-07-10 10:30:17 +0000 UTC (1 container statuses recorded) Jul 14 23:56:50.009: INFO: Container kube-proxy ready: true, restart count 0 Jul 14 23:56:50.009: INFO: busybox-host-aliases96c3f86b-e099-4e38-8c10-383b210d7b86 from kubelet-test-5266 started at 2020-07-14 23:56:41 +0000 UTC (1 container statuses recorded) Jul 14 23:56:50.009: INFO: Container busybox-host-aliases96c3f86b-e099-4e38-8c10-383b210d7b86 ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-e8ff51c3-8662-42a9-858d-da154f3b4ea4 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-e8ff51c3-8662-42a9-858d-da154f3b4ea4 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-e8ff51c3-8662-42a9-858d-da154f3b4ea4 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:56:58.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-225" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.281 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":294,"completed":91,"skipped":1441,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:56:58.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting 
for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 14 23:58:58.302: INFO: Deleting pod "var-expansion-f73bebc0-7b9d-4881-a402-787c902485cd" in namespace "var-expansion-4116" Jul 14 23:58:58.307: INFO: Wait up to 5m0s for pod "var-expansion-f73bebc0-7b9d-4881-a402-787c902485cd" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:59:00.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4116" for this suite. • [SLOW TEST:122.146 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":294,"completed":92,"skipped":1449,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 
23:59:00.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-2375 STEP: creating replication controller nodeport-test in namespace services-2375 I0714 23:59:00.496899 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-2375, replica count: 2 I0714 23:59:03.547267 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0714 23:59:06.547568 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 14 23:59:06.547: INFO: Creating new exec pod Jul 14 23:59:11.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-2375 execpodpqfkh -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Jul 14 23:59:11.803: INFO: stderr: "I0714 23:59:11.707940 1088 log.go:181] (0xc0007c14a0) (0xc0008f9360) Create stream\nI0714 23:59:11.707999 1088 log.go:181] (0xc0007c14a0) (0xc0008f9360) Stream added, broadcasting: 1\nI0714 23:59:11.710976 1088 log.go:181] (0xc0007c14a0) Reply frame received for 1\nI0714 23:59:11.711023 1088 log.go:181] (0xc0007c14a0) (0xc0008f9400) Create stream\nI0714 23:59:11.711036 1088 log.go:181] (0xc0007c14a0) (0xc0008f9400) Stream added, broadcasting: 3\nI0714 23:59:11.711969 1088 log.go:181] (0xc0007c14a0) Reply frame received for 3\nI0714 23:59:11.711999 
1088 log.go:181] (0xc0007c14a0) (0xc0008f94a0) Create stream\nI0714 23:59:11.712009 1088 log.go:181] (0xc0007c14a0) (0xc0008f94a0) Stream added, broadcasting: 5\nI0714 23:59:11.713283 1088 log.go:181] (0xc0007c14a0) Reply frame received for 5\nI0714 23:59:11.796169 1088 log.go:181] (0xc0007c14a0) Data frame received for 5\nI0714 23:59:11.796216 1088 log.go:181] (0xc0008f94a0) (5) Data frame handling\nI0714 23:59:11.796243 1088 log.go:181] (0xc0008f94a0) (5) Data frame sent\nI0714 23:59:11.796259 1088 log.go:181] (0xc0007c14a0) Data frame received for 5\nI0714 23:59:11.796271 1088 log.go:181] (0xc0008f94a0) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0714 23:59:11.796341 1088 log.go:181] (0xc0008f94a0) (5) Data frame sent\nI0714 23:59:11.796435 1088 log.go:181] (0xc0007c14a0) Data frame received for 5\nI0714 23:59:11.796463 1088 log.go:181] (0xc0008f94a0) (5) Data frame handling\nI0714 23:59:11.796621 1088 log.go:181] (0xc0007c14a0) Data frame received for 3\nI0714 23:59:11.796635 1088 log.go:181] (0xc0008f9400) (3) Data frame handling\nI0714 23:59:11.798654 1088 log.go:181] (0xc0007c14a0) Data frame received for 1\nI0714 23:59:11.798668 1088 log.go:181] (0xc0008f9360) (1) Data frame handling\nI0714 23:59:11.798679 1088 log.go:181] (0xc0008f9360) (1) Data frame sent\nI0714 23:59:11.798691 1088 log.go:181] (0xc0007c14a0) (0xc0008f9360) Stream removed, broadcasting: 1\nI0714 23:59:11.798820 1088 log.go:181] (0xc0007c14a0) Go away received\nI0714 23:59:11.799023 1088 log.go:181] (0xc0007c14a0) (0xc0008f9360) Stream removed, broadcasting: 1\nI0714 23:59:11.799037 1088 log.go:181] (0xc0007c14a0) (0xc0008f9400) Stream removed, broadcasting: 3\nI0714 23:59:11.799043 1088 log.go:181] (0xc0007c14a0) (0xc0008f94a0) Stream removed, broadcasting: 5\n" Jul 14 23:59:11.803: INFO: stdout: "" Jul 14 23:59:11.804: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 
--kubeconfig=/root/.kube/config exec --namespace=services-2375 execpodpqfkh -- /bin/sh -x -c nc -zv -t -w 2 10.102.42.72 80' Jul 14 23:59:12.019: INFO: stderr: "I0714 23:59:11.936940 1107 log.go:181] (0xc0007e7130) (0xc000d85900) Create stream\nI0714 23:59:11.936994 1107 log.go:181] (0xc0007e7130) (0xc000d85900) Stream added, broadcasting: 1\nI0714 23:59:11.943196 1107 log.go:181] (0xc0007e7130) Reply frame received for 1\nI0714 23:59:11.943266 1107 log.go:181] (0xc0007e7130) (0xc000488140) Create stream\nI0714 23:59:11.943284 1107 log.go:181] (0xc0007e7130) (0xc000488140) Stream added, broadcasting: 3\nI0714 23:59:11.944598 1107 log.go:181] (0xc0007e7130) Reply frame received for 3\nI0714 23:59:11.944625 1107 log.go:181] (0xc0007e7130) (0xc0007aef00) Create stream\nI0714 23:59:11.944632 1107 log.go:181] (0xc0007e7130) (0xc0007aef00) Stream added, broadcasting: 5\nI0714 23:59:11.945832 1107 log.go:181] (0xc0007e7130) Reply frame received for 5\nI0714 23:59:12.011573 1107 log.go:181] (0xc0007e7130) Data frame received for 3\nI0714 23:59:12.011650 1107 log.go:181] (0xc000488140) (3) Data frame handling\nI0714 23:59:12.011739 1107 log.go:181] (0xc0007e7130) Data frame received for 5\nI0714 23:59:12.011799 1107 log.go:181] (0xc0007aef00) (5) Data frame handling\nI0714 23:59:12.011825 1107 log.go:181] (0xc0007aef00) (5) Data frame sent\nI0714 23:59:12.011847 1107 log.go:181] (0xc0007e7130) Data frame received for 5\nI0714 23:59:12.011861 1107 log.go:181] (0xc0007aef00) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.42.72 80\nConnection to 10.102.42.72 80 port [tcp/http] succeeded!\nI0714 23:59:12.013464 1107 log.go:181] (0xc0007e7130) Data frame received for 1\nI0714 23:59:12.013492 1107 log.go:181] (0xc000d85900) (1) Data frame handling\nI0714 23:59:12.013517 1107 log.go:181] (0xc000d85900) (1) Data frame sent\nI0714 23:59:12.013633 1107 log.go:181] (0xc0007e7130) (0xc000d85900) Stream removed, broadcasting: 1\nI0714 23:59:12.013678 1107 log.go:181] (0xc0007e7130) Go 
away received\nI0714 23:59:12.014011 1107 log.go:181] (0xc0007e7130) (0xc000d85900) Stream removed, broadcasting: 1\nI0714 23:59:12.014044 1107 log.go:181] (0xc0007e7130) (0xc000488140) Stream removed, broadcasting: 3\nI0714 23:59:12.014063 1107 log.go:181] (0xc0007e7130) (0xc0007aef00) Stream removed, broadcasting: 5\n" Jul 14 23:59:12.019: INFO: stdout: "" Jul 14 23:59:12.019: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-2375 execpodpqfkh -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 32056' Jul 14 23:59:12.228: INFO: stderr: "I0714 23:59:12.145509 1126 log.go:181] (0xc000a60d10) (0xc000cefa40) Create stream\nI0714 23:59:12.145563 1126 log.go:181] (0xc000a60d10) (0xc000cefa40) Stream added, broadcasting: 1\nI0714 23:59:12.151009 1126 log.go:181] (0xc000a60d10) Reply frame received for 1\nI0714 23:59:12.151094 1126 log.go:181] (0xc000a60d10) (0xc000b5c280) Create stream\nI0714 23:59:12.151117 1126 log.go:181] (0xc000a60d10) (0xc000b5c280) Stream added, broadcasting: 3\nI0714 23:59:12.152124 1126 log.go:181] (0xc000a60d10) Reply frame received for 3\nI0714 23:59:12.152147 1126 log.go:181] (0xc000a60d10) (0xc000458320) Create stream\nI0714 23:59:12.152159 1126 log.go:181] (0xc000a60d10) (0xc000458320) Stream added, broadcasting: 5\nI0714 23:59:12.153215 1126 log.go:181] (0xc000a60d10) Reply frame received for 5\nI0714 23:59:12.220679 1126 log.go:181] (0xc000a60d10) Data frame received for 3\nI0714 23:59:12.220844 1126 log.go:181] (0xc000b5c280) (3) Data frame handling\nI0714 23:59:12.220895 1126 log.go:181] (0xc000a60d10) Data frame received for 5\nI0714 23:59:12.220946 1126 log.go:181] (0xc000458320) (5) Data frame handling\nI0714 23:59:12.220976 1126 log.go:181] (0xc000458320) (5) Data frame sent\nI0714 23:59:12.221002 1126 log.go:181] (0xc000a60d10) Data frame received for 5\nI0714 23:59:12.221032 1126 log.go:181] (0xc000458320) (5) Data frame handling\n+ nc -zv -t -w 2 
172.18.0.14 32056\nConnection to 172.18.0.14 32056 port [tcp/32056] succeeded!\nI0714 23:59:12.222337 1126 log.go:181] (0xc000a60d10) Data frame received for 1\nI0714 23:59:12.222356 1126 log.go:181] (0xc000cefa40) (1) Data frame handling\nI0714 23:59:12.222366 1126 log.go:181] (0xc000cefa40) (1) Data frame sent\nI0714 23:59:12.222379 1126 log.go:181] (0xc000a60d10) (0xc000cefa40) Stream removed, broadcasting: 1\nI0714 23:59:12.222430 1126 log.go:181] (0xc000a60d10) Go away received\nI0714 23:59:12.222711 1126 log.go:181] (0xc000a60d10) (0xc000cefa40) Stream removed, broadcasting: 1\nI0714 23:59:12.222734 1126 log.go:181] (0xc000a60d10) (0xc000b5c280) Stream removed, broadcasting: 3\nI0714 23:59:12.222750 1126 log.go:181] (0xc000a60d10) (0xc000458320) Stream removed, broadcasting: 5\n" Jul 14 23:59:12.229: INFO: stdout: "" Jul 14 23:59:12.229: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-2375 execpodpqfkh -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 32056' Jul 14 23:59:12.456: INFO: stderr: "I0714 23:59:12.367473 1144 log.go:181] (0xc0006bd550) (0xc000a832c0) Create stream\nI0714 23:59:12.367520 1144 log.go:181] (0xc0006bd550) (0xc000a832c0) Stream added, broadcasting: 1\nI0714 23:59:12.370537 1144 log.go:181] (0xc0006bd550) Reply frame received for 1\nI0714 23:59:12.370580 1144 log.go:181] (0xc0006bd550) (0xc000894960) Create stream\nI0714 23:59:12.370596 1144 log.go:181] (0xc0006bd550) (0xc000894960) Stream added, broadcasting: 3\nI0714 23:59:12.371703 1144 log.go:181] (0xc0006bd550) Reply frame received for 3\nI0714 23:59:12.371744 1144 log.go:181] (0xc0006bd550) (0xc0008c1860) Create stream\nI0714 23:59:12.371758 1144 log.go:181] (0xc0006bd550) (0xc0008c1860) Stream added, broadcasting: 5\nI0714 23:59:12.372702 1144 log.go:181] (0xc0006bd550) Reply frame received for 5\nI0714 23:59:12.448817 1144 log.go:181] (0xc0006bd550) Data frame received for 5\nI0714 23:59:12.448846 1144 
log.go:181] (0xc0008c1860) (5) Data frame handling\nI0714 23:59:12.448856 1144 log.go:181] (0xc0008c1860) (5) Data frame sent\nI0714 23:59:12.448865 1144 log.go:181] (0xc0006bd550) Data frame received for 5\nI0714 23:59:12.448871 1144 log.go:181] (0xc0008c1860) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 32056\nConnection to 172.18.0.11 32056 port [tcp/32056] succeeded!\nI0714 23:59:12.448888 1144 log.go:181] (0xc0008c1860) (5) Data frame sent\nI0714 23:59:12.449277 1144 log.go:181] (0xc0006bd550) Data frame received for 3\nI0714 23:59:12.449301 1144 log.go:181] (0xc000894960) (3) Data frame handling\nI0714 23:59:12.449454 1144 log.go:181] (0xc0006bd550) Data frame received for 5\nI0714 23:59:12.449478 1144 log.go:181] (0xc0008c1860) (5) Data frame handling\nI0714 23:59:12.451006 1144 log.go:181] (0xc0006bd550) Data frame received for 1\nI0714 23:59:12.451020 1144 log.go:181] (0xc000a832c0) (1) Data frame handling\nI0714 23:59:12.451048 1144 log.go:181] (0xc000a832c0) (1) Data frame sent\nI0714 23:59:12.451075 1144 log.go:181] (0xc0006bd550) (0xc000a832c0) Stream removed, broadcasting: 1\nI0714 23:59:12.451113 1144 log.go:181] (0xc0006bd550) Go away received\nI0714 23:59:12.451390 1144 log.go:181] (0xc0006bd550) (0xc000a832c0) Stream removed, broadcasting: 1\nI0714 23:59:12.451404 1144 log.go:181] (0xc0006bd550) (0xc000894960) Stream removed, broadcasting: 3\nI0714 23:59:12.451411 1144 log.go:181] (0xc0006bd550) (0xc0008c1860) Stream removed, broadcasting: 5\n" Jul 14 23:59:12.456: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:59:12.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2375" for this suite. 
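The NodePort test above probes the service three ways with `nc -zv`: by service DNS name on port 80, by ClusterIP (10.102.42.72) on port 80, and by each node IP on the allocated node port (32056). A sketch of the kind of Service involved — the selector and target port are assumptions, not taken from the log:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-test
spec:
  type: NodePort
  selector:
    name: nodeport-test         # must match the labels on the RC's pods
  ports:
    - port: 80                  # ClusterIP port (what `nc nodeport-test 80` hits)
      targetPort: 80            # container port on the backing pods (assumed)
      # nodePort is left unset, so the apiserver allocates one from the
      # default 30000-32767 range (32056 in this run)
```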
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:12.120 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":294,"completed":93,"skipped":1454,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:59:12.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 14 23:59:16.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "kubelet-test-2111" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":294,"completed":94,"skipped":1475,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} S ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 14 23:59:16.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-fb7440b1-275d-4bb8-a145-8890065a70f9 in namespace container-probe-3791 Jul 14 23:59:22.772: INFO: Started pod busybox-fb7440b1-275d-4bb8-a145-8890065a70f9 in namespace container-probe-3791 STEP: checking the pod's current state and verifying that restartCount is present Jul 14 23:59:22.776: INFO: Initial restart count of pod busybox-fb7440b1-275d-4bb8-a145-8890065a70f9 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 
00:03:23.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3791" for this suite. • [SLOW TEST:246.871 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":294,"completed":95,"skipped":1476,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:03:23.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jul 15 00:03:29.920: INFO: 0 pods remaining Jul 15 00:03:29.920: INFO: 0 pods has nil DeletionTimestamp Jul 15 00:03:29.920: INFO: STEP: 
Gathering metrics W0715 00:03:31.483100 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jul 15 00:03:33.890: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:03:33.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7971" for this suite. 
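The garbage-collector test above deletes a replication controller with delete options that keep the RC around until all its pods are gone, i.e. foreground cascading deletion. A sketch of the DeleteOptions body that requests this behavior (normally sent as the body of the DELETE request rather than applied as a manifest):

```yaml
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground   # the RC gets a foregroundDeletion finalizer and is
                                # only removed after its dependent pods are deleted
```

Recent kubectl exposes the same choice as `kubectl delete rc <name> --cascade=foreground` (the alternatives being `background` and `orphan`).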
• [SLOW TEST:10.418 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":294,"completed":96,"skipped":1479,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:03:33.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-4782 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 15 00:03:34.304: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jul 15 00:03:34.570: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 15 
00:03:36.574: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 15 00:03:38.575: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 15 00:03:40.574: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 15 00:03:42.575: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 15 00:03:44.574: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 15 00:03:46.574: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 15 00:03:48.575: INFO: The status of Pod netserver-0 is Running (Ready = true) Jul 15 00:03:48.581: INFO: The status of Pod netserver-1 is Running (Ready = false) Jul 15 00:03:50.585: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jul 15 00:03:54.621: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.129:8080/dial?request=hostname&protocol=http&host=10.244.2.18&port=8080&tries=1'] Namespace:pod-network-test-4782 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 00:03:54.622: INFO: >>> kubeConfig: /root/.kube/config I0715 00:03:54.653517 7 log.go:181] (0xc002f77550) (0xc00156d860) Create stream I0715 00:03:54.653544 7 log.go:181] (0xc002f77550) (0xc00156d860) Stream added, broadcasting: 1 I0715 00:03:54.655434 7 log.go:181] (0xc002f77550) Reply frame received for 1 I0715 00:03:54.655482 7 log.go:181] (0xc002f77550) (0xc00156d900) Create stream I0715 00:03:54.655494 7 log.go:181] (0xc002f77550) (0xc00156d900) Stream added, broadcasting: 3 I0715 00:03:54.656218 7 log.go:181] (0xc002f77550) Reply frame received for 3 I0715 00:03:54.656252 7 log.go:181] (0xc002f77550) (0xc000f85a40) Create stream I0715 00:03:54.656264 7 log.go:181] (0xc002f77550) (0xc000f85a40) Stream added, broadcasting: 5 I0715 00:03:54.657119 7 log.go:181] (0xc002f77550) Reply frame received for 5 I0715 00:03:54.720846 7 log.go:181] 
(0xc002f77550) Data frame received for 3 I0715 00:03:54.720897 7 log.go:181] (0xc00156d900) (3) Data frame handling I0715 00:03:54.720930 7 log.go:181] (0xc00156d900) (3) Data frame sent I0715 00:03:54.721510 7 log.go:181] (0xc002f77550) Data frame received for 3 I0715 00:03:54.721543 7 log.go:181] (0xc00156d900) (3) Data frame handling I0715 00:03:54.722508 7 log.go:181] (0xc002f77550) Data frame received for 5 I0715 00:03:54.722553 7 log.go:181] (0xc000f85a40) (5) Data frame handling I0715 00:03:54.724512 7 log.go:181] (0xc002f77550) Data frame received for 1 I0715 00:03:54.724527 7 log.go:181] (0xc00156d860) (1) Data frame handling I0715 00:03:54.724550 7 log.go:181] (0xc00156d860) (1) Data frame sent I0715 00:03:54.724566 7 log.go:181] (0xc002f77550) (0xc00156d860) Stream removed, broadcasting: 1 I0715 00:03:54.724595 7 log.go:181] (0xc002f77550) Go away received I0715 00:03:54.724640 7 log.go:181] (0xc002f77550) (0xc00156d860) Stream removed, broadcasting: 1 I0715 00:03:54.724651 7 log.go:181] (0xc002f77550) (0xc00156d900) Stream removed, broadcasting: 3 I0715 00:03:54.724662 7 log.go:181] (0xc002f77550) (0xc000f85a40) Stream removed, broadcasting: 5 Jul 15 00:03:54.724: INFO: Waiting for responses: map[] Jul 15 00:03:54.727: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.129:8080/dial?request=hostname&protocol=http&host=10.244.1.128&port=8080&tries=1'] Namespace:pod-network-test-4782 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 00:03:54.727: INFO: >>> kubeConfig: /root/.kube/config I0715 00:03:54.756705 7 log.go:181] (0xc002f77c30) (0xc0010ae140) Create stream I0715 00:03:54.756810 7 log.go:181] (0xc002f77c30) (0xc0010ae140) Stream added, broadcasting: 1 I0715 00:03:54.758950 7 log.go:181] (0xc002f77c30) Reply frame received for 1 I0715 00:03:54.759000 7 log.go:181] (0xc002f77c30) (0xc002673680) Create stream I0715 00:03:54.759023 7 log.go:181] 
(0xc002f77c30) (0xc002673680) Stream added, broadcasting: 3 I0715 00:03:54.760037 7 log.go:181] (0xc002f77c30) Reply frame received for 3 I0715 00:03:54.760093 7 log.go:181] (0xc002f77c30) (0xc002673720) Create stream I0715 00:03:54.760142 7 log.go:181] (0xc002f77c30) (0xc002673720) Stream added, broadcasting: 5 I0715 00:03:54.761291 7 log.go:181] (0xc002f77c30) Reply frame received for 5 I0715 00:03:54.817818 7 log.go:181] (0xc002f77c30) Data frame received for 3 I0715 00:03:54.817850 7 log.go:181] (0xc002673680) (3) Data frame handling I0715 00:03:54.817871 7 log.go:181] (0xc002673680) (3) Data frame sent I0715 00:03:54.818215 7 log.go:181] (0xc002f77c30) Data frame received for 3 I0715 00:03:54.818251 7 log.go:181] (0xc002673680) (3) Data frame handling I0715 00:03:54.818583 7 log.go:181] (0xc002f77c30) Data frame received for 5 I0715 00:03:54.818607 7 log.go:181] (0xc002673720) (5) Data frame handling I0715 00:03:54.823032 7 log.go:181] (0xc002f77c30) Data frame received for 1 I0715 00:03:54.823070 7 log.go:181] (0xc0010ae140) (1) Data frame handling I0715 00:03:54.823093 7 log.go:181] (0xc0010ae140) (1) Data frame sent I0715 00:03:54.823110 7 log.go:181] (0xc002f77c30) (0xc0010ae140) Stream removed, broadcasting: 1 I0715 00:03:54.823173 7 log.go:181] (0xc002f77c30) Go away received I0715 00:03:54.823274 7 log.go:181] (0xc002f77c30) (0xc0010ae140) Stream removed, broadcasting: 1 I0715 00:03:54.823318 7 log.go:181] (0xc002f77c30) (0xc002673680) Stream removed, broadcasting: 3 I0715 00:03:54.823348 7 log.go:181] (0xc002f77c30) (0xc002673720) Stream removed, broadcasting: 5 Jul 15 00:03:54.823: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:03:54.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4782" for this suite. 
• [SLOW TEST:20.933 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":294,"completed":97,"skipped":1486,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:03:54.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be 
removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:04:10.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8509" for this suite. STEP: Destroying namespace "nsdeletetest-8535" for this suite. Jul 15 00:04:10.082: INFO: Namespace nsdeletetest-8535 was already deleted STEP: Destroying namespace "nsdeletetest-7267" for this suite. • [SLOW TEST:15.254 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":294,"completed":98,"skipped":1519,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:04:10.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 00:04:10.152: INFO: Creating deployment "test-recreate-deployment" Jul 15 00:04:10.166: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jul 15 00:04:10.197: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jul 15 00:04:12.364: INFO: Waiting deployment "test-recreate-deployment" to complete Jul 15 00:04:12.374: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730368250, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730368250, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730368250, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730368250, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6bf85785bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 15 00:04:14.380: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jul 15 00:04:14.392: INFO: Updating deployment test-recreate-deployment Jul 15 00:04:14.392: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jul 15 00:04:15.008: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-1060 /apis/apps/v1/namespaces/deployment-1060/deployments/test-recreate-deployment 6a96f3cf-22c7-41db-b28a-37f901d2c0fc 1221194 2 2020-07-15 00:04:10 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-07-15 00:04:14 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-07-15 00:04:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] 
[] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003b0db48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-15 00:04:14 +0000 UTC,LastTransitionTime:2020-07-15 00:04:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-07-15 00:04:14 +0000 UTC,LastTransitionTime:2020-07-15 00:04:10 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jul 15 00:04:15.023: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-1060 /apis/apps/v1/namespaces/deployment-1060/replicasets/test-recreate-deployment-d5667d9c7 0502f7fa-dfeb-4885-9774-3958a3cc2499 1221192 1 2020-07-15 00:04:14 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 
Deployment test-recreate-deployment 6a96f3cf-22c7-41db-b28a-37f901d2c0fc 0xc001c89a30 0xc001c89a31}] [] [{kube-controller-manager Update apps/v1 2020-07-15 00:04:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6a96f3cf-22c7-41db-b28a-37f901d2c0fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001c89aa8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 15 00:04:15.023: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jul 15 00:04:15.023: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6bf85785bb deployment-1060 /apis/apps/v1/namespaces/deployment-1060/replicasets/test-recreate-deployment-6bf85785bb 01aca2e1-b9bf-401e-b4b6-6d3f07a014e1 1221182 2 2020-07-15 00:04:10 +0000 UTC map[name:sample-pod-3 pod-template-hash:6bf85785bb] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 6a96f3cf-22c7-41db-b28a-37f901d2c0fc 0xc001c89937 0xc001c89938}] [] [{kube-controller-manager Update apps/v1 2020-07-15 00:04:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6a96f3cf-22c7-41db-b28a-37f901d2c0fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6bf85785bb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6bf85785bb] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001c899c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil 
default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 15 00:04:15.027: INFO: Pod "test-recreate-deployment-d5667d9c7-75rrp" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-75rrp test-recreate-deployment-d5667d9c7- deployment-1060 /api/v1/namespaces/deployment-1060/pods/test-recreate-deployment-d5667d9c7-75rrp 48f02c54-3674-4d73-a545-017f577ffdde 1221195 0 2020-07-15 00:04:14 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 0502f7fa-dfeb-4885-9774-3958a3cc2499 0xc001c89f80 0xc001c89f81}] [] [{kube-controller-manager Update v1 2020-07-15 00:04:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0502f7fa-dfeb-4885-9774-3958a3cc2499\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-15 00:04:14 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7zv7x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7zv7x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7zv7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil
,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:04:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:04:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:04:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-07-15 00:04:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-07-15 00:04:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:04:15.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1060" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":294,"completed":99,"skipped":1565,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:04:15.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jul 15 00:04:23.484: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 15 00:04:23.504: INFO: Pod pod-with-prestop-http-hook still exists Jul 15 00:04:25.504: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 15 00:04:25.509: INFO: Pod pod-with-prestop-http-hook still exists Jul 15 00:04:27.504: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 15 00:04:27.529: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:04:27.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3516" for this suite. 
• [SLOW TEST:12.518 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":294,"completed":100,"skipped":1570,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:04:27.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-14e3be6c-2df4-4769-9e0b-cb99f3e7977e STEP: Creating configMap with name cm-test-opt-upd-78f1c978-f171-4831-9638-e0f184391857 STEP: Creating the pod STEP: Deleting configmap 
cm-test-opt-del-14e3be6c-2df4-4769-9e0b-cb99f3e7977e STEP: Updating configmap cm-test-opt-upd-78f1c978-f171-4831-9638-e0f184391857 STEP: Creating configMap with name cm-test-opt-create-0eee1baa-d550-4478-b66c-0b58efaa73b5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:05:42.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3924" for this suite. • [SLOW TEST:74.540 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":101,"skipped":1575,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:05:42.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-szkm STEP: Creating a pod to test atomic-volume-subpath Jul 15 00:05:42.213: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-szkm" in namespace "subpath-7839" to be "Succeeded or Failed" Jul 15 00:05:42.216: INFO: Pod "pod-subpath-test-downwardapi-szkm": Phase="Pending", Reason="", readiness=false. Elapsed: 3.033979ms Jul 15 00:05:44.375: INFO: Pod "pod-subpath-test-downwardapi-szkm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161918529s Jul 15 00:05:46.379: INFO: Pod "pod-subpath-test-downwardapi-szkm": Phase="Running", Reason="", readiness=true. Elapsed: 4.166760502s Jul 15 00:05:48.530: INFO: Pod "pod-subpath-test-downwardapi-szkm": Phase="Running", Reason="", readiness=true. Elapsed: 6.317559343s Jul 15 00:05:50.534: INFO: Pod "pod-subpath-test-downwardapi-szkm": Phase="Running", Reason="", readiness=true. Elapsed: 8.321790545s Jul 15 00:05:52.538: INFO: Pod "pod-subpath-test-downwardapi-szkm": Phase="Running", Reason="", readiness=true. Elapsed: 10.32519953s Jul 15 00:05:54.542: INFO: Pod "pod-subpath-test-downwardapi-szkm": Phase="Running", Reason="", readiness=true. Elapsed: 12.329103884s Jul 15 00:05:56.545: INFO: Pod "pod-subpath-test-downwardapi-szkm": Phase="Running", Reason="", readiness=true. Elapsed: 14.332738029s Jul 15 00:05:58.549: INFO: Pod "pod-subpath-test-downwardapi-szkm": Phase="Running", Reason="", readiness=true. Elapsed: 16.336551534s Jul 15 00:06:00.553: INFO: Pod "pod-subpath-test-downwardapi-szkm": Phase="Running", Reason="", readiness=true. Elapsed: 18.340749908s Jul 15 00:06:02.557: INFO: Pod "pod-subpath-test-downwardapi-szkm": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.343937394s Jul 15 00:06:04.597: INFO: Pod "pod-subpath-test-downwardapi-szkm": Phase="Running", Reason="", readiness=true. Elapsed: 22.384439819s Jul 15 00:06:06.601: INFO: Pod "pod-subpath-test-downwardapi-szkm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.388151391s STEP: Saw pod success Jul 15 00:06:06.601: INFO: Pod "pod-subpath-test-downwardapi-szkm" satisfied condition "Succeeded or Failed" Jul 15 00:06:06.603: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-szkm container test-container-subpath-downwardapi-szkm: STEP: delete the pod Jul 15 00:06:06.681: INFO: Waiting for pod pod-subpath-test-downwardapi-szkm to disappear Jul 15 00:06:06.684: INFO: Pod pod-subpath-test-downwardapi-szkm no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-szkm Jul 15 00:06:06.684: INFO: Deleting pod "pod-subpath-test-downwardapi-szkm" in namespace "subpath-7839" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:06:06.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7839" for this suite. 
• [SLOW TEST:24.642 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":294,"completed":102,"skipped":1577,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:06:06.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 00:06:06.843: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:06:07.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6915" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":294,"completed":103,"skipped":1578,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:06:07.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jul 15 00:06:07.952: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 15 00:06:07.960: INFO: Waiting for terminating namespaces to be deleted... 
Jul 15 00:06:07.965: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jul 15 00:06:07.969: INFO: kindnet-qt4jk from kube-system started at 2020-07-10 10:30:16 +0000 UTC (1 container statuses recorded) Jul 15 00:06:07.970: INFO: Container kindnet-cni ready: true, restart count 0 Jul 15 00:06:07.970: INFO: kube-proxy-xb9q4 from kube-system started at 2020-07-10 10:30:16 +0000 UTC (1 container statuses recorded) Jul 15 00:06:07.970: INFO: Container kube-proxy ready: true, restart count 0 Jul 15 00:06:07.970: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jul 15 00:06:07.974: INFO: kindnet-gkkxx from kube-system started at 2020-07-10 10:30:17 +0000 UTC (1 container statuses recorded) Jul 15 00:06:07.974: INFO: Container kindnet-cni ready: true, restart count 0 Jul 15 00:06:07.974: INFO: kube-proxy-s596l from kube-system started at 2020-07-10 10:30:17 +0000 UTC (1 container statuses recorded) Jul 15 00:06:07.974: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Jul 15 00:06:08.088: INFO: Pod kindnet-gkkxx requesting resource cpu=100m on Node latest-worker2 Jul 15 00:06:08.088: INFO: Pod kindnet-qt4jk requesting resource cpu=100m on Node latest-worker Jul 15 00:06:08.088: INFO: Pod kube-proxy-s596l requesting resource cpu=0m on Node latest-worker2 Jul 15 00:06:08.088: INFO: Pod kube-proxy-xb9q4 requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. 
Jul 15 00:06:08.088: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Jul 15 00:06:08.094: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-10e15819-d895-48de-8155-a6f3be861a0a.1621c44aaea28e17], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6566/filler-pod-10e15819-d895-48de-8155-a6f3be861a0a to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-10e15819-d895-48de-8155-a6f3be861a0a.1621c44afc00ba42], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-10e15819-d895-48de-8155-a6f3be861a0a.1621c44b6c5db64d], Reason = [Created], Message = [Created container filler-pod-10e15819-d895-48de-8155-a6f3be861a0a] STEP: Considering event: Type = [Normal], Name = [filler-pod-10e15819-d895-48de-8155-a6f3be861a0a.1621c44b7d34c793], Reason = [Started], Message = [Started container filler-pod-10e15819-d895-48de-8155-a6f3be861a0a] STEP: Considering event: Type = [Normal], Name = [filler-pod-167ba1fb-ab12-4b0d-848d-161b8ba029f5.1621c44ab0566753], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6566/filler-pod-167ba1fb-ab12-4b0d-848d-161b8ba029f5 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-167ba1fb-ab12-4b0d-848d-161b8ba029f5.1621c44b106ca28e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-167ba1fb-ab12-4b0d-848d-161b8ba029f5.1621c44b77d35592], Reason = [Created], Message = [Created container filler-pod-167ba1fb-ab12-4b0d-848d-161b8ba029f5] STEP: Considering event: Type = [Normal], Name = [filler-pod-167ba1fb-ab12-4b0d-848d-161b8ba029f5.1621c44b8a40efaf], Reason = [Started], Message = [Started container 
filler-pod-167ba1fb-ab12-4b0d-848d-161b8ba029f5] STEP: Considering event: Type = [Warning], Name = [additional-pod.1621c44ba7f688ff], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.1621c44ba99ceb3c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:06:13.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6566" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:5.449 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":294,"completed":104,"skipped":1582,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:06:13.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 00:06:13.376: INFO: >>> kubeConfig: 
/root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:06:13.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-568" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":294,"completed":105,"skipped":1638,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} S ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:06:14.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 15 00:06:14.196: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4e8c8b0e-c873-4040-837e-c79e12d333cb" in namespace "downward-api-2115" to be 
"Succeeded or Failed" Jul 15 00:06:14.210: INFO: Pod "downwardapi-volume-4e8c8b0e-c873-4040-837e-c79e12d333cb": Phase="Pending", Reason="", readiness=false. Elapsed: 13.413008ms Jul 15 00:06:16.215: INFO: Pod "downwardapi-volume-4e8c8b0e-c873-4040-837e-c79e12d333cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018045334s Jul 15 00:06:18.261: INFO: Pod "downwardapi-volume-4e8c8b0e-c873-4040-837e-c79e12d333cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064154904s Jul 15 00:06:20.573: INFO: Pod "downwardapi-volume-4e8c8b0e-c873-4040-837e-c79e12d333cb": Phase="Running", Reason="", readiness=true. Elapsed: 6.376053066s Jul 15 00:06:22.577: INFO: Pod "downwardapi-volume-4e8c8b0e-c873-4040-837e-c79e12d333cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.380554374s STEP: Saw pod success Jul 15 00:06:22.577: INFO: Pod "downwardapi-volume-4e8c8b0e-c873-4040-837e-c79e12d333cb" satisfied condition "Succeeded or Failed" Jul 15 00:06:22.583: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-4e8c8b0e-c873-4040-837e-c79e12d333cb container client-container: STEP: delete the pod Jul 15 00:06:22.664: INFO: Waiting for pod downwardapi-volume-4e8c8b0e-c873-4040-837e-c79e12d333cb to disappear Jul 15 00:06:22.677: INFO: Pod downwardapi-volume-4e8c8b0e-c873-4040-837e-c79e12d333cb no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:06:22.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2115" for this suite. 
• [SLOW TEST:8.620 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":294,"completed":106,"skipped":1639,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:06:22.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 15 00:06:22.834: INFO: Waiting up to 5m0s for pod "pod-f97a4334-c4f4-4af1-ab58-02f334eb88ed" in namespace "emptydir-2750" to be "Succeeded or Failed" Jul 15 00:06:22.870: INFO: Pod "pod-f97a4334-c4f4-4af1-ab58-02f334eb88ed": Phase="Pending", Reason="", readiness=false. 
Elapsed: 35.663701ms Jul 15 00:06:24.910: INFO: Pod "pod-f97a4334-c4f4-4af1-ab58-02f334eb88ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075417807s Jul 15 00:06:26.914: INFO: Pod "pod-f97a4334-c4f4-4af1-ab58-02f334eb88ed": Phase="Running", Reason="", readiness=true. Elapsed: 4.079615908s Jul 15 00:06:28.919: INFO: Pod "pod-f97a4334-c4f4-4af1-ab58-02f334eb88ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.084824784s STEP: Saw pod success Jul 15 00:06:28.919: INFO: Pod "pod-f97a4334-c4f4-4af1-ab58-02f334eb88ed" satisfied condition "Succeeded or Failed" Jul 15 00:06:28.921: INFO: Trying to get logs from node latest-worker2 pod pod-f97a4334-c4f4-4af1-ab58-02f334eb88ed container test-container: STEP: delete the pod Jul 15 00:06:28.943: INFO: Waiting for pod pod-f97a4334-c4f4-4af1-ab58-02f334eb88ed to disappear Jul 15 00:06:28.959: INFO: Pod pod-f97a4334-c4f4-4af1-ab58-02f334eb88ed no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:06:28.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2750" for this suite. 
• [SLOW TEST:6.282 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":107,"skipped":1649,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:06:28.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 15 00:06:29.036: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4826f17-686d-4891-8c00-37faa27d4de4" in namespace 
"downward-api-7833" to be "Succeeded or Failed" Jul 15 00:06:29.038: INFO: Pod "downwardapi-volume-c4826f17-686d-4891-8c00-37faa27d4de4": Phase="Pending", Reason="", readiness=false. Elapsed: 1.861011ms Jul 15 00:06:31.042: INFO: Pod "downwardapi-volume-c4826f17-686d-4891-8c00-37faa27d4de4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006610024s Jul 15 00:06:33.047: INFO: Pod "downwardapi-volume-c4826f17-686d-4891-8c00-37faa27d4de4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010886067s STEP: Saw pod success Jul 15 00:06:33.047: INFO: Pod "downwardapi-volume-c4826f17-686d-4891-8c00-37faa27d4de4" satisfied condition "Succeeded or Failed" Jul 15 00:06:33.049: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c4826f17-686d-4891-8c00-37faa27d4de4 container client-container: STEP: delete the pod Jul 15 00:06:33.068: INFO: Waiting for pod downwardapi-volume-c4826f17-686d-4891-8c00-37faa27d4de4 to disappear Jul 15 00:06:33.079: INFO: Pod downwardapi-volume-c4826f17-686d-4891-8c00-37faa27d4de4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:06:33.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7833" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":294,"completed":108,"skipped":1664,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:06:33.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 15 00:06:33.248: INFO: Waiting up to 5m0s for pod "downwardapi-volume-010e0203-dd8b-4c54-96cc-2f2d9c699baf" in namespace "downward-api-3628" to be "Succeeded or Failed" Jul 15 00:06:33.253: INFO: Pod "downwardapi-volume-010e0203-dd8b-4c54-96cc-2f2d9c699baf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.630694ms Jul 15 00:06:35.257: INFO: Pod "downwardapi-volume-010e0203-dd8b-4c54-96cc-2f2d9c699baf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008812434s Jul 15 00:06:37.260: INFO: Pod "downwardapi-volume-010e0203-dd8b-4c54-96cc-2f2d9c699baf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012492277s STEP: Saw pod success Jul 15 00:06:37.261: INFO: Pod "downwardapi-volume-010e0203-dd8b-4c54-96cc-2f2d9c699baf" satisfied condition "Succeeded or Failed" Jul 15 00:06:37.263: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-010e0203-dd8b-4c54-96cc-2f2d9c699baf container client-container: STEP: delete the pod Jul 15 00:06:37.532: INFO: Waiting for pod downwardapi-volume-010e0203-dd8b-4c54-96cc-2f2d9c699baf to disappear Jul 15 00:06:37.546: INFO: Pod downwardapi-volume-010e0203-dd8b-4c54-96cc-2f2d9c699baf no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:06:37.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3628" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":294,"completed":109,"skipped":1680,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:06:37.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 15 00:06:37.704: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23f9f091-6a1d-49c0-9fd3-4b17a8d31495" in namespace "downward-api-4234" to be "Succeeded or Failed" Jul 15 00:06:37.708: INFO: Pod "downwardapi-volume-23f9f091-6a1d-49c0-9fd3-4b17a8d31495": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12032ms Jul 15 00:06:39.712: INFO: Pod "downwardapi-volume-23f9f091-6a1d-49c0-9fd3-4b17a8d31495": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008235106s Jul 15 00:06:41.716: INFO: Pod "downwardapi-volume-23f9f091-6a1d-49c0-9fd3-4b17a8d31495": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012316829s STEP: Saw pod success Jul 15 00:06:41.716: INFO: Pod "downwardapi-volume-23f9f091-6a1d-49c0-9fd3-4b17a8d31495" satisfied condition "Succeeded or Failed" Jul 15 00:06:41.719: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-23f9f091-6a1d-49c0-9fd3-4b17a8d31495 container client-container: STEP: delete the pod Jul 15 00:06:41.868: INFO: Waiting for pod downwardapi-volume-23f9f091-6a1d-49c0-9fd3-4b17a8d31495 to disappear Jul 15 00:06:41.877: INFO: Pod downwardapi-volume-23f9f091-6a1d-49c0-9fd3-4b17a8d31495 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:06:41.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4234" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":294,"completed":110,"skipped":1706,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:06:41.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 15 00:06:42.717: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 15 00:06:44.859: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730368402, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730368402, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730368402, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730368402, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 15 00:06:47.890: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:06:49.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6372" for this suite.
STEP: Destroying namespace "webhook-6372-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.434 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":294,"completed":111,"skipped":1711,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:06:49.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-cfgz
STEP: Creating a pod to test atomic-volume-subpath
Jul 15 00:06:49.452: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-cfgz" in namespace "subpath-5387" to be "Succeeded or Failed"
Jul 15 00:06:49.471: INFO: Pod "pod-subpath-test-configmap-cfgz": Phase="Pending", Reason="", readiness=false. Elapsed: 19.122909ms
Jul 15 00:06:51.475: INFO: Pod "pod-subpath-test-configmap-cfgz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023189079s
Jul 15 00:06:53.480: INFO: Pod "pod-subpath-test-configmap-cfgz": Phase="Running", Reason="", readiness=true. Elapsed: 4.027777431s
Jul 15 00:06:55.484: INFO: Pod "pod-subpath-test-configmap-cfgz": Phase="Running", Reason="", readiness=true. Elapsed: 6.031754482s
Jul 15 00:06:57.489: INFO: Pod "pod-subpath-test-configmap-cfgz": Phase="Running", Reason="", readiness=true. Elapsed: 8.036376062s
Jul 15 00:06:59.493: INFO: Pod "pod-subpath-test-configmap-cfgz": Phase="Running", Reason="", readiness=true. Elapsed: 10.040512786s
Jul 15 00:07:01.497: INFO: Pod "pod-subpath-test-configmap-cfgz": Phase="Running", Reason="", readiness=true. Elapsed: 12.044450447s
Jul 15 00:07:03.501: INFO: Pod "pod-subpath-test-configmap-cfgz": Phase="Running", Reason="", readiness=true. Elapsed: 14.048900673s
Jul 15 00:07:05.505: INFO: Pod "pod-subpath-test-configmap-cfgz": Phase="Running", Reason="", readiness=true. Elapsed: 16.05310669s
Jul 15 00:07:07.510: INFO: Pod "pod-subpath-test-configmap-cfgz": Phase="Running", Reason="", readiness=true. Elapsed: 18.057673114s
Jul 15 00:07:09.513: INFO: Pod "pod-subpath-test-configmap-cfgz": Phase="Running", Reason="", readiness=true. Elapsed: 20.06107593s
Jul 15 00:07:11.517: INFO: Pod "pod-subpath-test-configmap-cfgz": Phase="Running", Reason="", readiness=true. Elapsed: 22.065299785s
Jul 15 00:07:13.521: INFO: Pod "pod-subpath-test-configmap-cfgz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.068882585s
STEP: Saw pod success
Jul 15 00:07:13.521: INFO: Pod "pod-subpath-test-configmap-cfgz" satisfied condition "Succeeded or Failed"
Jul 15 00:07:13.523: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-cfgz container test-container-subpath-configmap-cfgz:
STEP: delete the pod
Jul 15 00:07:13.543: INFO: Waiting for pod pod-subpath-test-configmap-cfgz to disappear
Jul 15 00:07:13.597: INFO: Pod pod-subpath-test-configmap-cfgz no longer exists
STEP: Deleting pod pod-subpath-test-configmap-cfgz
Jul 15 00:07:13.597: INFO: Deleting pod "pod-subpath-test-configmap-cfgz" in namespace "subpath-5387"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:07:13.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5387" for this suite.
• [SLOW TEST:24.279 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":294,"completed":112,"skipped":1725,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:07:13.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:07:18.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1679" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":294,"completed":113,"skipped":1745,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:07:18.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 15 00:07:19.283: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 15 00:07:21.293: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730368439, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730368439, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730368439, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730368439, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 15 00:07:24.331: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:07:24.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7517" for this suite.
STEP: Destroying namespace "webhook-7517-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.928 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate configmap [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":294,"completed":114,"skipped":1766,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:07:24.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: validating api versions
Jul 15 00:07:24.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config api-versions'
Jul 15 00:07:24.837: INFO: stderr: ""
Jul 15 00:07:24.837: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:07:24.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8473" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":294,"completed":115,"skipped":1787,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:07:24.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test override command
Jul 15 00:07:24.977: INFO: Waiting up to 5m0s for pod "client-containers-821faf98-99e9-43bc-ac3d-63bfd521b39d" in namespace "containers-7820" to be "Succeeded or Failed"
Jul 15 00:07:25.017: INFO: Pod "client-containers-821faf98-99e9-43bc-ac3d-63bfd521b39d": Phase="Pending", Reason="", readiness=false. Elapsed: 39.378794ms
Jul 15 00:07:27.021: INFO: Pod "client-containers-821faf98-99e9-43bc-ac3d-63bfd521b39d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0438816s
Jul 15 00:07:29.026: INFO: Pod "client-containers-821faf98-99e9-43bc-ac3d-63bfd521b39d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048503256s
STEP: Saw pod success
Jul 15 00:07:29.026: INFO: Pod "client-containers-821faf98-99e9-43bc-ac3d-63bfd521b39d" satisfied condition "Succeeded or Failed"
Jul 15 00:07:29.030: INFO: Trying to get logs from node latest-worker pod client-containers-821faf98-99e9-43bc-ac3d-63bfd521b39d container test-container:
STEP: delete the pod
Jul 15 00:07:29.076: INFO: Waiting for pod client-containers-821faf98-99e9-43bc-ac3d-63bfd521b39d to disappear
Jul 15 00:07:29.083: INFO: Pod client-containers-821faf98-99e9-43bc-ac3d-63bfd521b39d no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:07:29.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7820" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":294,"completed":116,"skipped":1799,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]}
SSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:07:29.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-39f078bb-a262-44e7-a16d-03bac66c0ffe
STEP: Creating a pod to test consume secrets
Jul 15 00:07:29.287: INFO: Waiting up to 5m0s for pod "pod-secrets-c04e75bd-4d4b-4120-9317-1d0acf511f54" in namespace "secrets-7481" to be "Succeeded or Failed"
Jul 15 00:07:29.291: INFO: Pod "pod-secrets-c04e75bd-4d4b-4120-9317-1d0acf511f54": Phase="Pending", Reason="", readiness=false. Elapsed: 3.546418ms
Jul 15 00:07:31.295: INFO: Pod "pod-secrets-c04e75bd-4d4b-4120-9317-1d0acf511f54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007844351s
Jul 15 00:07:33.299: INFO: Pod "pod-secrets-c04e75bd-4d4b-4120-9317-1d0acf511f54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011815298s
STEP: Saw pod success
Jul 15 00:07:33.299: INFO: Pod "pod-secrets-c04e75bd-4d4b-4120-9317-1d0acf511f54" satisfied condition "Succeeded or Failed"
Jul 15 00:07:33.302: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-c04e75bd-4d4b-4120-9317-1d0acf511f54 container secret-volume-test:
STEP: delete the pod
Jul 15 00:07:33.384: INFO: Waiting for pod pod-secrets-c04e75bd-4d4b-4120-9317-1d0acf511f54 to disappear
Jul 15 00:07:33.443: INFO: Pod pod-secrets-c04e75bd-4d4b-4120-9317-1d0acf511f54 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:07:33.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7481" for this suite.
STEP: Destroying namespace "secret-namespace-8894" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":294,"completed":117,"skipped":1803,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:07:33.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:07:50.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7817" for this suite.
• [SLOW TEST:17.282 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":294,"completed":118,"skipped":1839,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]}
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:07:50.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Jul 15 00:07:55.440: INFO: Successfully updated pod "annotationupdate959a5e58-bab1-4cd2-bf98-a3896d9aadf6"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:07:59.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7569" for this suite.
• [SLOW TEST:8.692 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":294,"completed":119,"skipped":1839,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]}
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:07:59.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-3f93faf3-ff8e-40ad-bffe-8f184ae41f53
STEP: Creating a pod to test consume configMaps
Jul 15 00:07:59.588: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a1ed02e-5587-4220-abca-19a5a55e33e1" in namespace "configmap-8918" to be "Succeeded or Failed"
Jul 15 00:07:59.608: INFO: Pod "pod-configmaps-1a1ed02e-5587-4220-abca-19a5a55e33e1": Phase="Pending", Reason="", readiness=false. Elapsed: 19.748025ms
Jul 15 00:08:01.639: INFO: Pod "pod-configmaps-1a1ed02e-5587-4220-abca-19a5a55e33e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051572816s
Jul 15 00:08:03.644: INFO: Pod "pod-configmaps-1a1ed02e-5587-4220-abca-19a5a55e33e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055844367s
STEP: Saw pod success
Jul 15 00:08:03.644: INFO: Pod "pod-configmaps-1a1ed02e-5587-4220-abca-19a5a55e33e1" satisfied condition "Succeeded or Failed"
Jul 15 00:08:03.647: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-1a1ed02e-5587-4220-abca-19a5a55e33e1 container configmap-volume-test:
STEP: delete the pod
Jul 15 00:08:03.730: INFO: Waiting for pod pod-configmaps-1a1ed02e-5587-4220-abca-19a5a55e33e1 to disappear
Jul 15 00:08:03.735: INFO: Pod pod-configmaps-1a1ed02e-5587-4220-abca-19a5a55e33e1 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:08:03.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8918" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":120,"skipped":1839,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:08:03.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
Jul 15 00:08:03.839: INFO: Waiting up to 5m0s for pod "pod-488db6c9-0956-4832-8d2d-a541b30cb624" in namespace "emptydir-1089" to be "Succeeded or Failed"
Jul 15 00:08:03.864: INFO: Pod "pod-488db6c9-0956-4832-8d2d-a541b30cb624": Phase="Pending", Reason="", readiness=false. Elapsed: 25.052738ms
Jul 15 00:08:05.868: INFO: Pod "pod-488db6c9-0956-4832-8d2d-a541b30cb624": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028685331s
Jul 15 00:08:07.871: INFO: Pod "pod-488db6c9-0956-4832-8d2d-a541b30cb624": Phase="Running", Reason="", readiness=true. Elapsed: 4.031957272s
Jul 15 00:08:09.875: INFO: Pod "pod-488db6c9-0956-4832-8d2d-a541b30cb624": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035729698s
STEP: Saw pod success
Jul 15 00:08:09.875: INFO: Pod "pod-488db6c9-0956-4832-8d2d-a541b30cb624" satisfied condition "Succeeded or Failed"
Jul 15 00:08:09.878: INFO: Trying to get logs from node latest-worker pod pod-488db6c9-0956-4832-8d2d-a541b30cb624 container test-container:
STEP: delete the pod
Jul 15 00:08:09.952: INFO: Waiting for pod pod-488db6c9-0956-4832-8d2d-a541b30cb624 to disappear
Jul 15 00:08:09.957: INFO: Pod pod-488db6c9-0956-4832-8d2d-a541b30cb624 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:08:09.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1089" for this suite.
• [SLOW TEST:6.222 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":121,"skipped":1840,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:08:09.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating Agnhost RC
Jul 15 00:08:10.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2585'
Jul 15 00:08:13.445: INFO: stderr: ""
Jul 15 00:08:13.445: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Jul 15 00:08:14.449: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 15 00:08:14.449: INFO: Found 0 / 1
Jul 15 00:08:15.678: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 15 00:08:15.678: INFO: Found 0 / 1
Jul 15 00:08:16.449: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 15 00:08:16.449: INFO: Found 0 / 1
Jul 15 00:08:17.450: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 15 00:08:17.450: INFO: Found 1 / 1
Jul 15 00:08:17.450: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Jul 15 00:08:17.453: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 15 00:08:17.453: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jul 15 00:08:17.453: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config patch pod agnhost-primary-5pcdw --namespace=kubectl-2585 -p {"metadata":{"annotations":{"x":"y"}}}' Jul 15 00:08:17.572: INFO: stderr: "" Jul 15 00:08:17.572: INFO: stdout: "pod/agnhost-primary-5pcdw patched\n" STEP: checking annotations Jul 15 00:08:17.590: INFO: Selector matched 1 pods for map[app:agnhost] Jul 15 00:08:17.590: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:08:17.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2585" for this suite. • [SLOW TEST:7.631 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1485 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":294,"completed":122,"skipped":1846,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:08:17.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 15 00:08:18.175: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 15 00:08:20.184: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730368498, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730368498, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730368498, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730368498, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 15 00:08:23.275: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource 
creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 00:08:23.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:08:24.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5983" for this suite. STEP: Destroying namespace "webhook-5983-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.085 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":294,"completed":123,"skipped":1848,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:08:24.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Jul 15 00:08:24.787: INFO: Waiting up to 5m0s for pod "var-expansion-2ac7c88e-7933-4048-b861-63935883023e" in 
namespace "var-expansion-1956" to be "Succeeded or Failed" Jul 15 00:08:24.791: INFO: Pod "var-expansion-2ac7c88e-7933-4048-b861-63935883023e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.598107ms Jul 15 00:08:26.964: INFO: Pod "var-expansion-2ac7c88e-7933-4048-b861-63935883023e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176636915s Jul 15 00:08:28.968: INFO: Pod "var-expansion-2ac7c88e-7933-4048-b861-63935883023e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.180539082s STEP: Saw pod success Jul 15 00:08:28.968: INFO: Pod "var-expansion-2ac7c88e-7933-4048-b861-63935883023e" satisfied condition "Succeeded or Failed" Jul 15 00:08:28.970: INFO: Trying to get logs from node latest-worker pod var-expansion-2ac7c88e-7933-4048-b861-63935883023e container dapi-container: STEP: delete the pod Jul 15 00:08:29.037: INFO: Waiting for pod var-expansion-2ac7c88e-7933-4048-b861-63935883023e to disappear Jul 15 00:08:29.048: INFO: Pod var-expansion-2ac7c88e-7933-4048-b861-63935883023e no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:08:29.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1956" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":294,"completed":124,"skipped":1868,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:08:29.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Jul 15 00:08:33.767: INFO: Successfully updated pod "annotationupdateb2c1c1a4-e173-4ae3-9ba7-99bac79a29c7" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:08:35.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7308" for this suite. 
• [SLOW TEST:6.750 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":294,"completed":125,"skipped":1884,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:08:35.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Jul 15 00:08:35.854: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:08:50.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4622" for this suite. • [SLOW TEST:14.839 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":294,"completed":126,"skipped":1884,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} S ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:08:50.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 00:08:50.689: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-1987 I0715 
00:08:50.711130 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1987, replica count: 1 I0715 00:08:51.761447 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0715 00:08:52.761680 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0715 00:08:53.761918 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0715 00:08:54.762064 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 15 00:08:54.896: INFO: Created: latency-svc-zcqbz Jul 15 00:08:54.907: INFO: Got endpoints: latency-svc-zcqbz [45.053867ms] Jul 15 00:08:54.988: INFO: Created: latency-svc-gvnwl Jul 15 00:08:55.018: INFO: Got endpoints: latency-svc-gvnwl [111.022326ms] Jul 15 00:08:55.054: INFO: Created: latency-svc-rphzp Jul 15 00:08:55.069: INFO: Got endpoints: latency-svc-rphzp [162.405851ms] Jul 15 00:08:55.087: INFO: Created: latency-svc-h8fkq Jul 15 00:08:55.156: INFO: Got endpoints: latency-svc-h8fkq [249.05808ms] Jul 15 00:08:55.158: INFO: Created: latency-svc-mqzfh Jul 15 00:08:55.172: INFO: Got endpoints: latency-svc-mqzfh [264.183578ms] Jul 15 00:08:55.189: INFO: Created: latency-svc-b89sp Jul 15 00:08:55.207: INFO: Got endpoints: latency-svc-b89sp [299.879009ms] Jul 15 00:08:55.227: INFO: Created: latency-svc-fb64k Jul 15 00:08:55.238: INFO: Got endpoints: latency-svc-fb64k [330.575207ms] Jul 15 00:08:55.312: INFO: Created: latency-svc-z2g6x Jul 15 00:08:55.328: INFO: Got endpoints: latency-svc-z2g6x [420.098951ms] Jul 15 00:08:55.372: INFO: Created: latency-svc-dcpbw Jul 15 00:08:55.388: INFO: Got endpoints: latency-svc-dcpbw [480.460126ms] Jul 15 
00:08:55.405: INFO: Created: latency-svc-mvq2f Jul 15 00:08:55.462: INFO: Got endpoints: latency-svc-mvq2f [553.977935ms] Jul 15 00:08:55.463: INFO: Created: latency-svc-n6hsc Jul 15 00:08:55.492: INFO: Got endpoints: latency-svc-n6hsc [584.56022ms] Jul 15 00:08:55.534: INFO: Created: latency-svc-d46qt Jul 15 00:08:55.551: INFO: Got endpoints: latency-svc-d46qt [643.736683ms] Jul 15 00:08:55.593: INFO: Created: latency-svc-b5v4r Jul 15 00:08:55.616: INFO: Got endpoints: latency-svc-b5v4r [708.431542ms] Jul 15 00:08:55.663: INFO: Created: latency-svc-zwc9d Jul 15 00:08:55.689: INFO: Got endpoints: latency-svc-zwc9d [782.148383ms] Jul 15 00:08:55.754: INFO: Created: latency-svc-xmxdh Jul 15 00:08:55.774: INFO: Got endpoints: latency-svc-xmxdh [867.121693ms] Jul 15 00:08:55.832: INFO: Created: latency-svc-8lcsz Jul 15 00:08:55.852: INFO: Got endpoints: latency-svc-8lcsz [944.803848ms] Jul 15 00:08:55.898: INFO: Created: latency-svc-dp6fh Jul 15 00:08:55.906: INFO: Got endpoints: latency-svc-dp6fh [887.065199ms] Jul 15 00:08:55.939: INFO: Created: latency-svc-xq5p8 Jul 15 00:08:55.948: INFO: Got endpoints: latency-svc-xq5p8 [878.646159ms] Jul 15 00:08:55.990: INFO: Created: latency-svc-5zz2l Jul 15 00:08:56.054: INFO: Got endpoints: latency-svc-5zz2l [898.045392ms] Jul 15 00:08:56.057: INFO: Created: latency-svc-sdsln Jul 15 00:08:56.062: INFO: Got endpoints: latency-svc-sdsln [890.471717ms] Jul 15 00:08:56.113: INFO: Created: latency-svc-w5slg Jul 15 00:08:56.129: INFO: Got endpoints: latency-svc-w5slg [922.282256ms] Jul 15 00:08:56.149: INFO: Created: latency-svc-gthd7 Jul 15 00:08:56.210: INFO: Got endpoints: latency-svc-gthd7 [972.197011ms] Jul 15 00:08:56.213: INFO: Created: latency-svc-9jc85 Jul 15 00:08:56.219: INFO: Got endpoints: latency-svc-9jc85 [891.704272ms] Jul 15 00:08:56.242: INFO: Created: latency-svc-mjjsf Jul 15 00:08:56.269: INFO: Got endpoints: latency-svc-mjjsf [881.507789ms] Jul 15 00:08:56.359: INFO: Created: latency-svc-csgsr Jul 15 
00:08:56.370: INFO: Got endpoints: latency-svc-csgsr [908.548129ms] Jul 15 00:08:56.396: INFO: Created: latency-svc-prwxz Jul 15 00:08:56.428: INFO: Got endpoints: latency-svc-prwxz [936.493695ms] Jul 15 00:08:56.457: INFO: Created: latency-svc-vkmfm Jul 15 00:08:56.509: INFO: Got endpoints: latency-svc-vkmfm [957.612209ms] Jul 15 00:08:56.551: INFO: Created: latency-svc-qkzdf Jul 15 00:08:56.671: INFO: Got endpoints: latency-svc-qkzdf [1.055361028s] Jul 15 00:08:56.708: INFO: Created: latency-svc-fd5dv Jul 15 00:08:56.734: INFO: Got endpoints: latency-svc-fd5dv [1.044725007s] Jul 15 00:08:56.842: INFO: Created: latency-svc-hjwkg Jul 15 00:08:56.849: INFO: Got endpoints: latency-svc-hjwkg [1.074521597s] Jul 15 00:08:56.881: INFO: Created: latency-svc-7sfbh Jul 15 00:08:56.897: INFO: Got endpoints: latency-svc-7sfbh [1.045153354s] Jul 15 00:08:56.917: INFO: Created: latency-svc-pp87r Jul 15 00:08:56.988: INFO: Got endpoints: latency-svc-pp87r [1.081926579s] Jul 15 00:08:56.997: INFO: Created: latency-svc-bswq7 Jul 15 00:08:57.012: INFO: Got endpoints: latency-svc-bswq7 [1.06340098s] Jul 15 00:08:57.029: INFO: Created: latency-svc-jg528 Jul 15 00:08:57.042: INFO: Got endpoints: latency-svc-jg528 [987.350109ms] Jul 15 00:08:57.079: INFO: Created: latency-svc-7h5mx Jul 15 00:08:57.132: INFO: Got endpoints: latency-svc-7h5mx [1.069731842s] Jul 15 00:08:57.145: INFO: Created: latency-svc-9mm79 Jul 15 00:08:57.165: INFO: Got endpoints: latency-svc-9mm79 [1.035361124s] Jul 15 00:08:57.189: INFO: Created: latency-svc-9lz6c Jul 15 00:08:57.205: INFO: Got endpoints: latency-svc-9lz6c [994.882062ms] Jul 15 00:08:57.287: INFO: Created: latency-svc-stpkl Jul 15 00:08:57.296: INFO: Got endpoints: latency-svc-stpkl [1.076178459s] Jul 15 00:08:57.349: INFO: Created: latency-svc-qd72l Jul 15 00:08:57.385: INFO: Got endpoints: latency-svc-qd72l [1.11615458s] Jul 15 00:08:57.453: INFO: Created: latency-svc-znhnk Jul 15 00:08:57.482: INFO: Got endpoints: latency-svc-znhnk 
[1.111489602s] Jul 15 00:08:57.508: INFO: Created: latency-svc-9j8fz Jul 15 00:08:57.518: INFO: Got endpoints: latency-svc-9j8fz [1.089210655s] Jul 15 00:08:57.581: INFO: Created: latency-svc-t8q9r Jul 15 00:08:57.590: INFO: Got endpoints: latency-svc-t8q9r [1.08125749s] Jul 15 00:08:57.625: INFO: Created: latency-svc-t6r2g Jul 15 00:08:57.651: INFO: Got endpoints: latency-svc-t6r2g [979.460676ms] Jul 15 00:08:57.669: INFO: Created: latency-svc-sfcdb Jul 15 00:08:57.737: INFO: Got endpoints: latency-svc-sfcdb [1.002279582s] Jul 15 00:08:57.754: INFO: Created: latency-svc-w4s5d Jul 15 00:08:57.781: INFO: Got endpoints: latency-svc-w4s5d [931.990542ms] Jul 15 00:08:57.781: INFO: Created: latency-svc-zhsjd Jul 15 00:08:57.795: INFO: Got endpoints: latency-svc-zhsjd [898.338699ms] Jul 15 00:08:57.885: INFO: Created: latency-svc-tvvhl Jul 15 00:08:57.898: INFO: Got endpoints: latency-svc-tvvhl [910.055216ms] Jul 15 00:08:57.915: INFO: Created: latency-svc-kcrdl Jul 15 00:08:57.939: INFO: Got endpoints: latency-svc-kcrdl [927.749411ms] Jul 15 00:08:57.970: INFO: Created: latency-svc-sxktm Jul 15 00:08:58.012: INFO: Got endpoints: latency-svc-sxktm [970.293371ms] Jul 15 00:08:58.027: INFO: Created: latency-svc-65pnc Jul 15 00:08:58.043: INFO: Got endpoints: latency-svc-65pnc [910.88148ms] Jul 15 00:08:58.087: INFO: Created: latency-svc-xcxxg Jul 15 00:08:58.103: INFO: Got endpoints: latency-svc-xcxxg [938.216541ms] Jul 15 00:08:58.155: INFO: Created: latency-svc-rbwcq Jul 15 00:08:58.163: INFO: Got endpoints: latency-svc-rbwcq [958.426995ms] Jul 15 00:08:58.186: INFO: Created: latency-svc-kpntv Jul 15 00:08:58.200: INFO: Got endpoints: latency-svc-kpntv [904.433194ms] Jul 15 00:08:58.225: INFO: Created: latency-svc-5l9tk Jul 15 00:08:58.243: INFO: Got endpoints: latency-svc-5l9tk [856.999863ms] Jul 15 00:08:58.354: INFO: Created: latency-svc-g2df2 Jul 15 00:08:58.384: INFO: Got endpoints: latency-svc-g2df2 [183.523087ms] Jul 15 00:08:58.443: INFO: Created: 
latency-svc-wtslq Jul 15 00:08:58.453: INFO: Got endpoints: latency-svc-wtslq [971.283203ms] Jul 15 00:08:58.475: INFO: Created: latency-svc-bkdpq Jul 15 00:08:58.501: INFO: Got endpoints: latency-svc-bkdpq [983.296103ms] Jul 15 00:08:58.525: INFO: Created: latency-svc-sg49j Jul 15 00:08:58.537: INFO: Got endpoints: latency-svc-sg49j [946.849669ms] Jul 15 00:08:58.594: INFO: Created: latency-svc-m77tz Jul 15 00:08:58.624: INFO: Got endpoints: latency-svc-m77tz [972.995641ms] Jul 15 00:08:58.654: INFO: Created: latency-svc-bmpbb Jul 15 00:08:58.761: INFO: Got endpoints: latency-svc-bmpbb [1.024476768s] Jul 15 00:08:58.764: INFO: Created: latency-svc-xkdpm Jul 15 00:08:58.777: INFO: Got endpoints: latency-svc-xkdpm [996.245606ms] Jul 15 00:08:58.855: INFO: Created: latency-svc-56nq2 Jul 15 00:08:58.934: INFO: Got endpoints: latency-svc-56nq2 [1.138976238s] Jul 15 00:08:58.945: INFO: Created: latency-svc-kpwr2 Jul 15 00:08:58.958: INFO: Got endpoints: latency-svc-kpwr2 [1.060064345s] Jul 15 00:08:59.020: INFO: Created: latency-svc-hxhpv Jul 15 00:08:59.095: INFO: Got endpoints: latency-svc-hxhpv [1.156040599s] Jul 15 00:08:59.131: INFO: Created: latency-svc-clxvk Jul 15 00:08:59.145: INFO: Got endpoints: latency-svc-clxvk [1.132882072s] Jul 15 00:08:59.187: INFO: Created: latency-svc-cjhds Jul 15 00:08:59.245: INFO: Got endpoints: latency-svc-cjhds [1.20232699s] Jul 15 00:08:59.438: INFO: Created: latency-svc-s5n2c Jul 15 00:08:59.442: INFO: Got endpoints: latency-svc-s5n2c [1.33867088s] Jul 15 00:08:59.605: INFO: Created: latency-svc-lwmzc Jul 15 00:08:59.619: INFO: Got endpoints: latency-svc-lwmzc [1.455629106s] Jul 15 00:08:59.637: INFO: Created: latency-svc-54bns Jul 15 00:08:59.649: INFO: Got endpoints: latency-svc-54bns [1.406445927s] Jul 15 00:08:59.668: INFO: Created: latency-svc-tsbx4 Jul 15 00:08:59.680: INFO: Got endpoints: latency-svc-tsbx4 [1.295863152s] Jul 15 00:08:59.737: INFO: Created: latency-svc-z9rcp Jul 15 00:08:59.752: INFO: Got endpoints: 
latency-svc-z9rcp [1.298694888s] Jul 15 00:08:59.779: INFO: Created: latency-svc-pglfb Jul 15 00:08:59.788: INFO: Got endpoints: latency-svc-pglfb [1.286667845s] Jul 15 00:08:59.809: INFO: Created: latency-svc-99f74 Jul 15 00:08:59.818: INFO: Got endpoints: latency-svc-99f74 [1.28073755s] Jul 15 00:08:59.882: INFO: Created: latency-svc-vq7gj Jul 15 00:08:59.894: INFO: Got endpoints: latency-svc-vq7gj [1.269868522s] Jul 15 00:08:59.913: INFO: Created: latency-svc-5dkvm Jul 15 00:08:59.927: INFO: Got endpoints: latency-svc-5dkvm [1.165957279s] Jul 15 00:08:59.946: INFO: Created: latency-svc-tb24r Jul 15 00:08:59.963: INFO: Got endpoints: latency-svc-tb24r [1.18613397s] Jul 15 00:09:00.024: INFO: Created: latency-svc-bhjkv Jul 15 00:09:00.028: INFO: Got endpoints: latency-svc-bhjkv [1.093031202s] Jul 15 00:09:00.081: INFO: Created: latency-svc-5f8k4 Jul 15 00:09:00.096: INFO: Got endpoints: latency-svc-5f8k4 [1.137915018s] Jul 15 00:09:00.169: INFO: Created: latency-svc-pjwjz Jul 15 00:09:00.193: INFO: Got endpoints: latency-svc-pjwjz [1.097198251s] Jul 15 00:09:00.217: INFO: Created: latency-svc-xwmwg Jul 15 00:09:00.228: INFO: Got endpoints: latency-svc-xwmwg [1.083493007s] Jul 15 00:09:00.306: INFO: Created: latency-svc-8kgr7 Jul 15 00:09:00.313: INFO: Got endpoints: latency-svc-8kgr7 [1.067757188s] Jul 15 00:09:00.351: INFO: Created: latency-svc-wpjl2 Jul 15 00:09:00.367: INFO: Got endpoints: latency-svc-wpjl2 [925.260302ms] Jul 15 00:09:00.393: INFO: Created: latency-svc-875n9 Jul 15 00:09:00.437: INFO: Got endpoints: latency-svc-875n9 [818.051919ms] Jul 15 00:09:00.450: INFO: Created: latency-svc-jrf4w Jul 15 00:09:00.464: INFO: Got endpoints: latency-svc-jrf4w [814.470389ms] Jul 15 00:09:00.487: INFO: Created: latency-svc-hv86f Jul 15 00:09:00.500: INFO: Got endpoints: latency-svc-hv86f [820.244068ms] Jul 15 00:09:00.518: INFO: Created: latency-svc-p7zzv Jul 15 00:09:00.530: INFO: Got endpoints: latency-svc-p7zzv [778.431395ms] Jul 15 00:09:00.587: INFO: 
Created: latency-svc-6rcmg Jul 15 00:09:00.619: INFO: Got endpoints: latency-svc-6rcmg [831.474019ms] Jul 15 00:09:00.666: INFO: Created: latency-svc-dt8p9 Jul 15 00:09:00.773: INFO: Got endpoints: latency-svc-dt8p9 [954.813319ms] Jul 15 00:09:00.825: INFO: Created: latency-svc-qgz25 Jul 15 00:09:00.862: INFO: Got endpoints: latency-svc-qgz25 [968.568173ms] Jul 15 00:09:00.922: INFO: Created: latency-svc-5lvxn Jul 15 00:09:00.953: INFO: Got endpoints: latency-svc-5lvxn [1.025648422s] Jul 15 00:09:00.997: INFO: Created: latency-svc-lfl7c Jul 15 00:09:01.096: INFO: Got endpoints: latency-svc-lfl7c [1.132248904s] Jul 15 00:09:01.098: INFO: Created: latency-svc-c4xpx Jul 15 00:09:01.133: INFO: Got endpoints: latency-svc-c4xpx [1.105718636s] Jul 15 00:09:01.177: INFO: Created: latency-svc-m2nmq Jul 15 00:09:01.252: INFO: Got endpoints: latency-svc-m2nmq [1.156099763s] Jul 15 00:09:01.267: INFO: Created: latency-svc-pcrb6 Jul 15 00:09:01.284: INFO: Got endpoints: latency-svc-pcrb6 [1.091063253s] Jul 15 00:09:01.305: INFO: Created: latency-svc-r8gzn Jul 15 00:09:01.320: INFO: Got endpoints: latency-svc-r8gzn [1.092034263s] Jul 15 00:09:01.341: INFO: Created: latency-svc-wm64m Jul 15 00:09:01.419: INFO: Got endpoints: latency-svc-wm64m [1.106291469s] Jul 15 00:09:01.477: INFO: Created: latency-svc-wjtf5 Jul 15 00:09:01.489: INFO: Got endpoints: latency-svc-wjtf5 [1.121852194s] Jul 15 00:09:01.587: INFO: Created: latency-svc-ncd4p Jul 15 00:09:01.647: INFO: Got endpoints: latency-svc-ncd4p [1.21006198s] Jul 15 00:09:01.681: INFO: Created: latency-svc-8dxxf Jul 15 00:09:01.749: INFO: Got endpoints: latency-svc-8dxxf [1.285204394s] Jul 15 00:09:01.750: INFO: Created: latency-svc-bckzj Jul 15 00:09:01.771: INFO: Got endpoints: latency-svc-bckzj [1.271557292s] Jul 15 00:09:01.828: INFO: Created: latency-svc-ct8gr Jul 15 00:09:01.844: INFO: Got endpoints: latency-svc-ct8gr [1.313423804s] Jul 15 00:09:01.898: INFO: Created: latency-svc-8grsg Jul 15 00:09:01.915: INFO: Got 
endpoints: latency-svc-8grsg [1.295639891s] Jul 15 00:09:01.948: INFO: Created: latency-svc-g88mc Jul 15 00:09:01.971: INFO: Got endpoints: latency-svc-g88mc [1.198195343s] Jul 15 00:09:01.996: INFO: Created: latency-svc-5bd9m Jul 15 00:09:02.042: INFO: Got endpoints: latency-svc-5bd9m [1.179382308s] Jul 15 00:09:02.077: INFO: Created: latency-svc-gznmd Jul 15 00:09:02.091: INFO: Got endpoints: latency-svc-gznmd [1.138074464s] Jul 15 00:09:02.139: INFO: Created: latency-svc-9mmr6 Jul 15 00:09:02.204: INFO: Got endpoints: latency-svc-9mmr6 [1.10814826s] Jul 15 00:09:02.218: INFO: Created: latency-svc-p2dtr Jul 15 00:09:02.236: INFO: Got endpoints: latency-svc-p2dtr [1.102267095s] Jul 15 00:09:02.293: INFO: Created: latency-svc-b8mzg Jul 15 00:09:02.365: INFO: Got endpoints: latency-svc-b8mzg [1.113391906s] Jul 15 00:09:02.392: INFO: Created: latency-svc-t9pbq Jul 15 00:09:02.417: INFO: Got endpoints: latency-svc-t9pbq [1.133277824s] Jul 15 00:09:02.455: INFO: Created: latency-svc-vlf9c Jul 15 00:09:02.504: INFO: Got endpoints: latency-svc-vlf9c [1.183072189s] Jul 15 00:09:02.521: INFO: Created: latency-svc-d6xmj Jul 15 00:09:02.537: INFO: Got endpoints: latency-svc-d6xmj [1.117872335s] Jul 15 00:09:02.557: INFO: Created: latency-svc-8n78z Jul 15 00:09:02.574: INFO: Got endpoints: latency-svc-8n78z [1.084575025s] Jul 15 00:09:02.589: INFO: Created: latency-svc-ncj7m Jul 15 00:09:02.665: INFO: Got endpoints: latency-svc-ncj7m [1.017360635s] Jul 15 00:09:02.667: INFO: Created: latency-svc-m7c74 Jul 15 00:09:02.681: INFO: Got endpoints: latency-svc-m7c74 [932.343013ms] Jul 15 00:09:02.707: INFO: Created: latency-svc-kjfch Jul 15 00:09:02.749: INFO: Got endpoints: latency-svc-kjfch [977.245395ms] Jul 15 00:09:02.817: INFO: Created: latency-svc-qkfhm Jul 15 00:09:02.851: INFO: Got endpoints: latency-svc-qkfhm [1.006829306s] Jul 15 00:09:02.883: INFO: Created: latency-svc-wf6l9 Jul 15 00:09:02.953: INFO: Got endpoints: latency-svc-wf6l9 [1.037543439s] Jul 15 00:09:02.965: 
INFO: Created: latency-svc-k2bx6 Jul 15 00:09:02.980: INFO: Got endpoints: latency-svc-k2bx6 [1.009443384s] Jul 15 00:09:03.001: INFO: Created: latency-svc-wvkqj Jul 15 00:09:03.021: INFO: Got endpoints: latency-svc-wvkqj [979.402752ms] Jul 15 00:09:03.051: INFO: Created: latency-svc-5s65j Jul 15 00:09:03.120: INFO: Got endpoints: latency-svc-5s65j [1.028622501s] Jul 15 00:09:03.123: INFO: Created: latency-svc-ksnx7 Jul 15 00:09:03.151: INFO: Got endpoints: latency-svc-ksnx7 [946.931921ms] Jul 15 00:09:03.194: INFO: Created: latency-svc-hvzpb Jul 15 00:09:03.203: INFO: Got endpoints: latency-svc-hvzpb [967.285592ms] Jul 15 00:09:03.263: INFO: Created: latency-svc-9whjp Jul 15 00:09:03.291: INFO: Got endpoints: latency-svc-9whjp [925.794621ms] Jul 15 00:09:03.322: INFO: Created: latency-svc-lzc99 Jul 15 00:09:03.336: INFO: Got endpoints: latency-svc-lzc99 [918.900175ms] Jul 15 00:09:03.413: INFO: Created: latency-svc-jrfgs Jul 15 00:09:03.426: INFO: Got endpoints: latency-svc-jrfgs [922.74646ms] Jul 15 00:09:03.457: INFO: Created: latency-svc-s6cfc Jul 15 00:09:03.468: INFO: Got endpoints: latency-svc-s6cfc [930.935989ms] Jul 15 00:09:03.489: INFO: Created: latency-svc-t52xz Jul 15 00:09:03.569: INFO: Got endpoints: latency-svc-t52xz [994.919829ms] Jul 15 00:09:03.574: INFO: Created: latency-svc-65tsf Jul 15 00:09:03.590: INFO: Got endpoints: latency-svc-65tsf [924.864283ms] Jul 15 00:09:03.637: INFO: Created: latency-svc-gxws2 Jul 15 00:09:03.667: INFO: Got endpoints: latency-svc-gxws2 [985.502176ms] Jul 15 00:09:03.725: INFO: Created: latency-svc-rmvmd Jul 15 00:09:03.747: INFO: Got endpoints: latency-svc-rmvmd [998.163256ms] Jul 15 00:09:03.749: INFO: Created: latency-svc-jd8w6 Jul 15 00:09:03.778: INFO: Got endpoints: latency-svc-jd8w6 [927.628002ms] Jul 15 00:09:03.823: INFO: Created: latency-svc-5nbf9 Jul 15 00:09:03.875: INFO: Got endpoints: latency-svc-5nbf9 [922.367645ms] Jul 15 00:09:03.888: INFO: Created: latency-svc-2kkj6 Jul 15 00:09:03.903: INFO: Got 
endpoints: latency-svc-2kkj6 [922.1113ms] Jul 15 00:09:03.963: INFO: Created: latency-svc-bjb6k Jul 15 00:09:04.012: INFO: Got endpoints: latency-svc-bjb6k [990.857376ms] Jul 15 00:09:04.047: INFO: Created: latency-svc-bm6qs Jul 15 00:09:04.102: INFO: Got endpoints: latency-svc-bm6qs [982.53639ms] Jul 15 00:09:04.198: INFO: Created: latency-svc-9dc7b Jul 15 00:09:04.233: INFO: Got endpoints: latency-svc-9dc7b [1.081991257s] Jul 15 00:09:04.294: INFO: Created: latency-svc-4fz8t Jul 15 00:09:04.383: INFO: Got endpoints: latency-svc-4fz8t [1.180358923s] Jul 15 00:09:04.406: INFO: Created: latency-svc-v4rt5 Jul 15 00:09:04.448: INFO: Got endpoints: latency-svc-v4rt5 [1.156305126s] Jul 15 00:09:04.545: INFO: Created: latency-svc-5wzfh Jul 15 00:09:04.551: INFO: Got endpoints: latency-svc-5wzfh [1.214995359s] Jul 15 00:09:04.600: INFO: Created: latency-svc-wrd7z Jul 15 00:09:04.613: INFO: Got endpoints: latency-svc-wrd7z [1.18632858s] Jul 15 00:09:04.641: INFO: Created: latency-svc-trj7p Jul 15 00:09:04.693: INFO: Got endpoints: latency-svc-trj7p [1.224756036s] Jul 15 00:09:04.750: INFO: Created: latency-svc-gbfs2 Jul 15 00:09:04.763: INFO: Got endpoints: latency-svc-gbfs2 [1.194294776s] Jul 15 00:09:04.826: INFO: Created: latency-svc-7gqqw Jul 15 00:09:04.855: INFO: Got endpoints: latency-svc-7gqqw [1.265245181s] Jul 15 00:09:04.891: INFO: Created: latency-svc-lktd6 Jul 15 00:09:04.902: INFO: Got endpoints: latency-svc-lktd6 [1.234827987s] Jul 15 00:09:04.976: INFO: Created: latency-svc-bdhwm Jul 15 00:09:04.989: INFO: Got endpoints: latency-svc-bdhwm [1.241858669s] Jul 15 00:09:05.026: INFO: Created: latency-svc-8l4xb Jul 15 00:09:05.040: INFO: Got endpoints: latency-svc-8l4xb [1.261594773s] Jul 15 00:09:05.120: INFO: Created: latency-svc-q9t75 Jul 15 00:09:05.155: INFO: Got endpoints: latency-svc-q9t75 [1.280028155s] Jul 15 00:09:05.205: INFO: Created: latency-svc-29lt2 Jul 15 00:09:05.275: INFO: Got endpoints: latency-svc-29lt2 [1.372293727s] Jul 15 00:09:05.276: 
INFO: Created: latency-svc-zl9z4 Jul 15 00:09:05.299: INFO: Got endpoints: latency-svc-zl9z4 [1.286221884s] Jul 15 00:09:05.329: INFO: Created: latency-svc-sxbk8 Jul 15 00:09:05.344: INFO: Got endpoints: latency-svc-sxbk8 [1.242023951s] Jul 15 00:09:05.371: INFO: Created: latency-svc-crl9v Jul 15 00:09:05.425: INFO: Got endpoints: latency-svc-crl9v [1.192131542s] Jul 15 00:09:05.451: INFO: Created: latency-svc-fcwrj Jul 15 00:09:05.465: INFO: Got endpoints: latency-svc-fcwrj [1.081205141s] Jul 15 00:09:05.493: INFO: Created: latency-svc-8t2l2 Jul 15 00:09:05.587: INFO: Got endpoints: latency-svc-8t2l2 [1.139275023s] Jul 15 00:09:05.598: INFO: Created: latency-svc-bz6gz Jul 15 00:09:05.627: INFO: Got endpoints: latency-svc-bz6gz [1.075939753s] Jul 15 00:09:05.647: INFO: Created: latency-svc-xvwcg Jul 15 00:09:05.679: INFO: Got endpoints: latency-svc-xvwcg [1.066214623s] Jul 15 00:09:05.737: INFO: Created: latency-svc-5l5cp Jul 15 00:09:05.760: INFO: Got endpoints: latency-svc-5l5cp [1.066671574s] Jul 15 00:09:05.790: INFO: Created: latency-svc-4x4b8 Jul 15 00:09:05.815: INFO: Got endpoints: latency-svc-4x4b8 [1.052083564s] Jul 15 00:09:05.887: INFO: Created: latency-svc-6z768 Jul 15 00:09:05.893: INFO: Got endpoints: latency-svc-6z768 [1.038306969s] Jul 15 00:09:05.925: INFO: Created: latency-svc-k9jrg Jul 15 00:09:05.955: INFO: Got endpoints: latency-svc-k9jrg [1.053490477s] Jul 15 00:09:05.979: INFO: Created: latency-svc-4hkml Jul 15 00:09:06.030: INFO: Got endpoints: latency-svc-4hkml [1.041065065s] Jul 15 00:09:06.048: INFO: Created: latency-svc-sf6vc Jul 15 00:09:06.063: INFO: Got endpoints: latency-svc-sf6vc [1.022863295s] Jul 15 00:09:06.090: INFO: Created: latency-svc-4dwtj Jul 15 00:09:06.116: INFO: Got endpoints: latency-svc-4dwtj [960.748182ms] Jul 15 00:09:06.174: INFO: Created: latency-svc-wj7pw Jul 15 00:09:06.182: INFO: Got endpoints: latency-svc-wj7pw [907.214239ms] Jul 15 00:09:06.201: INFO: Created: latency-svc-pc52p Jul 15 00:09:06.213: INFO: Got 
endpoints: latency-svc-pc52p [913.936128ms] Jul 15 00:09:06.237: INFO: Created: latency-svc-bsqqn Jul 15 00:09:06.249: INFO: Got endpoints: latency-svc-bsqqn [904.37071ms] Jul 15 00:09:06.319: INFO: Created: latency-svc-rvfjv Jul 15 00:09:06.346: INFO: Got endpoints: latency-svc-rvfjv [920.49192ms] Jul 15 00:09:06.375: INFO: Created: latency-svc-b7wrc Jul 15 00:09:06.399: INFO: Got endpoints: latency-svc-b7wrc [934.063346ms] Jul 15 00:09:06.461: INFO: Created: latency-svc-8bhnz Jul 15 00:09:06.472: INFO: Got endpoints: latency-svc-8bhnz [884.614651ms] Jul 15 00:09:06.493: INFO: Created: latency-svc-zm7d6 Jul 15 00:09:06.509: INFO: Got endpoints: latency-svc-zm7d6 [881.602404ms] Jul 15 00:09:06.531: INFO: Created: latency-svc-mwrlk Jul 15 00:09:06.545: INFO: Got endpoints: latency-svc-mwrlk [865.409198ms] Jul 15 00:09:06.599: INFO: Created: latency-svc-zbmlw Jul 15 00:09:06.611: INFO: Got endpoints: latency-svc-zbmlw [851.120169ms] Jul 15 00:09:06.633: INFO: Created: latency-svc-r9bx4 Jul 15 00:09:06.651: INFO: Got endpoints: latency-svc-r9bx4 [835.958635ms] Jul 15 00:09:06.682: INFO: Created: latency-svc-x8k98 Jul 15 00:09:06.755: INFO: Got endpoints: latency-svc-x8k98 [861.347048ms] Jul 15 00:09:06.781: INFO: Created: latency-svc-fksf4 Jul 15 00:09:06.804: INFO: Got endpoints: latency-svc-fksf4 [849.010098ms] Jul 15 00:09:06.837: INFO: Created: latency-svc-xdd7w Jul 15 00:09:06.852: INFO: Got endpoints: latency-svc-xdd7w [821.940025ms] Jul 15 00:09:06.946: INFO: Created: latency-svc-hqwfp Jul 15 00:09:06.949: INFO: Got endpoints: latency-svc-hqwfp [885.971682ms] Jul 15 00:09:06.996: INFO: Created: latency-svc-tgrf8 Jul 15 00:09:07.027: INFO: Got endpoints: latency-svc-tgrf8 [910.815556ms] Jul 15 00:09:07.096: INFO: Created: latency-svc-6tq2b Jul 15 00:09:07.137: INFO: Created: latency-svc-ftmn4 Jul 15 00:09:07.139: INFO: Got endpoints: latency-svc-6tq2b [957.173342ms] Jul 15 00:09:07.194: INFO: Got endpoints: latency-svc-ftmn4 [981.764229ms] Jul 15 00:09:07.254: 
INFO: Created: latency-svc-xdcbg Jul 15 00:09:07.263: INFO: Got endpoints: latency-svc-xdcbg [1.014311854s] Jul 15 00:09:07.280: INFO: Created: latency-svc-s77tk Jul 15 00:09:07.299: INFO: Got endpoints: latency-svc-s77tk [952.773732ms] Jul 15 00:09:07.328: INFO: Created: latency-svc-sl5d9 Jul 15 00:09:07.389: INFO: Got endpoints: latency-svc-sl5d9 [990.281016ms] Jul 15 00:09:07.403: INFO: Created: latency-svc-vww9m Jul 15 00:09:07.414: INFO: Got endpoints: latency-svc-vww9m [942.32256ms] Jul 15 00:09:07.434: INFO: Created: latency-svc-vmnlf Jul 15 00:09:07.438: INFO: Got endpoints: latency-svc-vmnlf [929.239825ms] Jul 15 00:09:07.461: INFO: Created: latency-svc-wfg5s Jul 15 00:09:07.475: INFO: Got endpoints: latency-svc-wfg5s [930.227771ms] Jul 15 00:09:07.527: INFO: Created: latency-svc-lbxth Jul 15 00:09:07.530: INFO: Got endpoints: latency-svc-lbxth [918.989377ms] Jul 15 00:09:07.585: INFO: Created: latency-svc-jmhtl Jul 15 00:09:07.601: INFO: Got endpoints: latency-svc-jmhtl [949.930668ms] Jul 15 00:09:07.672: INFO: Created: latency-svc-d6226 Jul 15 00:09:07.701: INFO: Got endpoints: latency-svc-d6226 [945.856658ms] Jul 15 00:09:07.702: INFO: Created: latency-svc-bcc2k Jul 15 00:09:07.717: INFO: Got endpoints: latency-svc-bcc2k [912.261739ms] Jul 15 00:09:07.737: INFO: Created: latency-svc-tf4lc Jul 15 00:09:07.752: INFO: Got endpoints: latency-svc-tf4lc [900.3466ms] Jul 15 00:09:07.806: INFO: Created: latency-svc-7nzdf Jul 15 00:09:07.830: INFO: Got endpoints: latency-svc-7nzdf [881.161668ms] Jul 15 00:09:07.866: INFO: Created: latency-svc-j5gzl Jul 15 00:09:07.879: INFO: Got endpoints: latency-svc-j5gzl [852.493783ms] Jul 15 00:09:07.940: INFO: Created: latency-svc-vlsmt Jul 15 00:09:07.952: INFO: Got endpoints: latency-svc-vlsmt [812.855776ms] Jul 15 00:09:07.995: INFO: Created: latency-svc-9zzcd Jul 15 00:09:08.011: INFO: Got endpoints: latency-svc-9zzcd [817.096207ms] Jul 15 00:09:08.084: INFO: Created: latency-svc-95qhd Jul 15 00:09:08.087: INFO: Got 
endpoints: latency-svc-95qhd [823.468084ms] Jul 15 00:09:08.112: INFO: Created: latency-svc-s9nct Jul 15 00:09:08.126: INFO: Got endpoints: latency-svc-s9nct [827.573367ms] Jul 15 00:09:08.148: INFO: Created: latency-svc-v2qfv Jul 15 00:09:08.162: INFO: Got endpoints: latency-svc-v2qfv [773.04294ms] Jul 15 00:09:08.180: INFO: Created: latency-svc-zhmqs Jul 15 00:09:08.210: INFO: Got endpoints: latency-svc-zhmqs [795.800133ms] Jul 15 00:09:08.229: INFO: Created: latency-svc-jzhmg Jul 15 00:09:08.247: INFO: Got endpoints: latency-svc-jzhmg [809.1812ms] Jul 15 00:09:08.280: INFO: Created: latency-svc-css56 Jul 15 00:09:08.296: INFO: Got endpoints: latency-svc-css56 [820.904864ms] Jul 15 00:09:08.365: INFO: Created: latency-svc-l66k9 Jul 15 00:09:08.382: INFO: Got endpoints: latency-svc-l66k9 [851.722758ms] Jul 15 00:09:08.382: INFO: Latencies: [111.022326ms 162.405851ms 183.523087ms 249.05808ms 264.183578ms 299.879009ms 330.575207ms 420.098951ms 480.460126ms 553.977935ms 584.56022ms 643.736683ms 708.431542ms 773.04294ms 778.431395ms 782.148383ms 795.800133ms 809.1812ms 812.855776ms 814.470389ms 817.096207ms 818.051919ms 820.244068ms 820.904864ms 821.940025ms 823.468084ms 827.573367ms 831.474019ms 835.958635ms 849.010098ms 851.120169ms 851.722758ms 852.493783ms 856.999863ms 861.347048ms 865.409198ms 867.121693ms 878.646159ms 881.161668ms 881.507789ms 881.602404ms 884.614651ms 885.971682ms 887.065199ms 890.471717ms 891.704272ms 898.045392ms 898.338699ms 900.3466ms 904.37071ms 904.433194ms 907.214239ms 908.548129ms 910.055216ms 910.815556ms 910.88148ms 912.261739ms 913.936128ms 918.900175ms 918.989377ms 920.49192ms 922.1113ms 922.282256ms 922.367645ms 922.74646ms 924.864283ms 925.260302ms 925.794621ms 927.628002ms 927.749411ms 929.239825ms 930.227771ms 930.935989ms 931.990542ms 932.343013ms 934.063346ms 936.493695ms 938.216541ms 942.32256ms 944.803848ms 945.856658ms 946.849669ms 946.931921ms 949.930668ms 952.773732ms 954.813319ms 957.173342ms 957.612209ms 958.426995ms 
960.748182ms 967.285592ms 968.568173ms 970.293371ms 971.283203ms 972.197011ms 972.995641ms 977.245395ms 979.402752ms 979.460676ms 981.764229ms 982.53639ms 983.296103ms 985.502176ms 987.350109ms 990.281016ms 990.857376ms 994.882062ms 994.919829ms 996.245606ms 998.163256ms 1.002279582s 1.006829306s 1.009443384s 1.014311854s 1.017360635s 1.022863295s 1.024476768s 1.025648422s 1.028622501s 1.035361124s 1.037543439s 1.038306969s 1.041065065s 1.044725007s 1.045153354s 1.052083564s 1.053490477s 1.055361028s 1.060064345s 1.06340098s 1.066214623s 1.066671574s 1.067757188s 1.069731842s 1.074521597s 1.075939753s 1.076178459s 1.081205141s 1.08125749s 1.081926579s 1.081991257s 1.083493007s 1.084575025s 1.089210655s 1.091063253s 1.092034263s 1.093031202s 1.097198251s 1.102267095s 1.105718636s 1.106291469s 1.10814826s 1.111489602s 1.113391906s 1.11615458s 1.117872335s 1.121852194s 1.132248904s 1.132882072s 1.133277824s 1.137915018s 1.138074464s 1.138976238s 1.139275023s 1.156040599s 1.156099763s 1.156305126s 1.165957279s 1.179382308s 1.180358923s 1.183072189s 1.18613397s 1.18632858s 1.192131542s 1.194294776s 1.198195343s 1.20232699s 1.21006198s 1.214995359s 1.224756036s 1.234827987s 1.241858669s 1.242023951s 1.261594773s 1.265245181s 1.269868522s 1.271557292s 1.280028155s 1.28073755s 1.285204394s 1.286221884s 1.286667845s 1.295639891s 1.295863152s 1.298694888s 1.313423804s 1.33867088s 1.372293727s 1.406445927s 1.455629106s] Jul 15 00:09:08.382: INFO: 50 %ile: 982.53639ms Jul 15 00:09:08.382: INFO: 90 %ile: 1.234827987s Jul 15 00:09:08.382: INFO: 99 %ile: 1.406445927s Jul 15 00:09:08.382: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:09:08.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-1987" for this suite. 
• [SLOW TEST:17.754 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":294,"completed":127,"skipped":1885,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] IngressClass API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:09:08.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.io/v1 Jul 15 00:09:08.463: FAIL: expected ingressclasses, got []v1.APIResource{v1.APIResource{Name:"networkpolicies", SingularName:"", Namespaced:true, Group:"", Version:"", Kind:"NetworkPolicy", Verbs:v1.Verbs{"create", "delete", "deletecollection", "get", "list", "patch", 
"update", "watch"}, ShortNames:[]string{"netpol"}, Categories:[]string(nil), StorageVersionHash:"YpfwF18m1G8="}} Expected : false to equal : true Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func15.2() /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:210 +0x91e k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000ef2240) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x337 k8s.io/kubernetes/test/e2e.TestE2E(0xc000ef2240) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc000ef2240, 0x4cc3740) /usr/local/go/src/testing/testing.go:991 +0xdc created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1042 +0x357 [AfterEach] [sig-network] IngressClass API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "ingressclass-2582". STEP: Found 0 events. Jul 15 00:09:08.492: INFO: POD NODE PHASE GRACE CONDITIONS Jul 15 00:09:08.492: INFO: Jul 15 00:09:08.494: INFO: Logging node info for node latest-control-plane Jul 15 00:09:08.496: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane /api/v1/nodes/latest-control-plane fab71f49-3955-4070-ba3f-a34ab7dbcb1f 1221155 0 2020-07-10 10:29:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2020-07-10 10:29:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2020-07-10 10:30:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2020-07-15 00:04:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:ni
l,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-15 00:04:11 +0000 UTC,LastTransitionTime:2020-07-10 10:29:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-15 00:04:11 +0000 UTC,LastTransitionTime:2020-07-10 10:29:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-15 00:04:11 +0000 UTC,LastTransitionTime:2020-07-10 10:29:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-15 00:04:11 +0000 UTC,LastTransitionTime:2020-07-10 10:30:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.12,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:08e3d1af94e64c419f74d6afa70f0d43,SystemUUID:b2b9a347-3d8a-409e-9c43-3d2f455385e1,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.4.0-beta.1-34-g49b0743c,KubeletVersion:v1.18.4,KubeProxyVersion:v1.18.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.4],SizeBytes:146649905,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.4],SizeBytes:133416062,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.4],SizeBytes:132840771,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20200619-15f5b3ab],SizeBytes:120473968,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.4],SizeBytes:113093425,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jul 15 00:09:08.496: INFO: Logging kubelet events for node latest-control-plane Jul 15 00:09:08.497: INFO: Logging pods the kubelet thinks is on node latest-control-plane Jul 15 00:09:08.514: INFO: kube-controller-manager-latest-control-plane started at 2020-07-10 10:29:39 +0000 UTC (0+1 container statuses recorded) Jul 15 00:09:08.515: INFO: Container kube-controller-manager ready: true, restart count 1 Jul 15 00:09:08.515: INFO: coredns-66bff467f8-xqch9 started at 2020-07-10 10:30:09 +0000 UTC (0+1 container statuses recorded) Jul 15 00:09:08.515: INFO: Container coredns ready: true, restart count 0 Jul 15 00:09:08.515: INFO: local-path-provisioner-67795f75bd-wdgcp started at 2020-07-10 10:30:09 +0000 UTC (0+1 container statuses recorded) Jul 15 00:09:08.515: INFO: Container local-path-provisioner ready: true, restart count 0 Jul 15 00:09:08.515: INFO: kube-apiserver-latest-control-plane started at 2020-07-10 10:29:39 +0000 
UTC (0+1 container statuses recorded) Jul 15 00:09:08.515: INFO: Container kube-apiserver ready: true, restart count 0 Jul 15 00:09:08.515: INFO: kube-scheduler-latest-control-plane started at 2020-07-10 10:29:39 +0000 UTC (0+1 container statuses recorded) Jul 15 00:09:08.515: INFO: Container kube-scheduler ready: true, restart count 1 Jul 15 00:09:08.515: INFO: kindnet-6gzv5 started at 2020-07-10 10:29:53 +0000 UTC (0+1 container statuses recorded) Jul 15 00:09:08.515: INFO: Container kindnet-cni ready: true, restart count 0 Jul 15 00:09:08.515: INFO: kube-proxy-bvnbl started at 2020-07-10 10:29:53 +0000 UTC (0+1 container statuses recorded) Jul 15 00:09:08.515: INFO: Container kube-proxy ready: true, restart count 0 Jul 15 00:09:08.515: INFO: coredns-66bff467f8-lkg9r started at 2020-07-10 10:30:12 +0000 UTC (0+1 container statuses recorded) Jul 15 00:09:08.515: INFO: Container coredns ready: true, restart count 0 Jul 15 00:09:08.515: INFO: etcd-latest-control-plane started at 2020-07-10 10:29:39 +0000 UTC (0+1 container statuses recorded) Jul 15 00:09:08.515: INFO: Container etcd ready: true, restart count 0 W0715 00:09:08.520277 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
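The FAIL above is a version skew between the test binary and the cluster: the v1.20.0-alpha test expects `ingressclasses` to be served under `networking.k8s.io/v1`, but IngressClass only graduated to that group/version in Kubernetes 1.19, so the v1.18.4 kube-apiserver lists just `networkpolicies` there. A sketch of the membership check the test performs on the discovery response — `APIResource` here is a local stand-in for the `metav1.APIResource` fields that matter, so the example needs no client-go dependency:

```go
package main

import "fmt"

// APIResource is a minimal stand-in for metav1.APIResource, carrying only
// the fields this sketch inspects.
type APIResource struct {
	Name string
	Kind string
}

// hasResource reports whether a group/version's discovery response
// contains a resource by name — the kind of check the e2e test applies to
// /apis/networking.k8s.io/v1 before exercising the IngressClass API.
func hasResource(resources []APIResource, name string) bool {
	for _, r := range resources {
		if r.Name == name {
			return true
		}
	}
	return false
}

func main() {
	// What a v1.18 apiserver serves for networking.k8s.io/v1.
	served := []APIResource{{Name: "networkpolicies", Kind: "NetworkPolicy"}}
	fmt.Println(hasResource(served, "ingressclasses")) // false on v1.18.4, hence the FAIL
}
```

Running a matching test-binary and apiserver version (or skipping tests for APIs the server does not yet serve) avoids this class of failure.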
Jul 15 00:09:08.586: INFO: Latency metrics for node latest-control-plane Jul 15 00:09:08.586: INFO: Logging node info for node latest-worker Jul 15 00:09:08.589: INFO: Node Info: &Node{ObjectMeta:{latest-worker /api/v1/nodes/latest-worker ee905599-6d86-471c-8264-80d61eb4d02f 1221734 0 2020-07-10 10:30:12 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2020-07-10 10:30:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubeadm Update v1 2020-07-10 10:30:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kubelet Update v1 2020-07-15 00:05:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 
110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-15 00:05:56 +0000 UTC,LastTransitionTime:2020-07-10 10:30:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-15 00:05:56 +0000 UTC,LastTransitionTime:2020-07-10 10:30:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-15 00:05:56 +0000 UTC,LastTransitionTime:2020-07-10 10:30:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-15 00:05:56 +0000 UTC,LastTransitionTime:2020-07-10 10:30:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:469a70212bc546bfb73ddea4d8686893,SystemUUID:ff574bf8-eaa0-484e-9d22-817c6038d2e3,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.4.0-beta.1-34-g49b0743c,KubeletVersion:v1.18.4,KubeProxyVersion:v1.18.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.4],SizeBytes:146649905,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.4],SizeBytes:133416062,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.4],SizeBytes:132840771,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20200619-15f5b3ab],SizeBytes:120473968,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.4],SizeBytes:113093425,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12f377200949c25fde1e54bba639d34d119edd7cfcfb1d117526dba677c03c85 k8s.gcr.io/etcd:3.4.7],SizeBytes:104221097,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20],SizeBytes:46251412,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793 docker.io/library/busybox:latest],SizeBytes:767885,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jul 15 00:09:08.590: INFO: Logging kubelet events for node latest-worker Jul 15 00:09:08.592: INFO: Logging pods the kubelet thinks is on node latest-worker Jul 15 00:09:08.597: INFO: kube-proxy-xb9q4 started at 2020-07-10 10:30:16 +0000 UTC (0+1 container statuses recorded) Jul 15 00:09:08.597: INFO: Container 
kube-proxy ready: true, restart count 0 Jul 15 00:09:08.597: INFO: kindnet-qt4jk started at 2020-07-10 10:30:16 +0000 UTC (0+1 container statuses recorded) Jul 15 00:09:08.597: INFO: Container kindnet-cni ready: true, restart count 0 W0715 00:09:08.602159 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jul 15 00:09:08.652: INFO: Latency metrics for node latest-worker Jul 15 00:09:08.652: INFO: Logging node info for node latest-worker2 Jul 15 00:09:08.657: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 /api/v1/nodes/latest-worker2 0ed4e844-533c-4115-b90e-6070300ff379 1221836 0 2020-07-10 10:30:11 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2020-07-10 10:30:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2020-07-10 10:30:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2020-07-15 00:06:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 
110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-15 00:06:24 +0000 UTC,LastTransitionTime:2020-07-10 10:30:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-15 00:06:24 +0000 UTC,LastTransitionTime:2020-07-10 10:30:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-15 00:06:24 +0000 UTC,LastTransitionTime:2020-07-10 10:30:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-15 00:06:24 +0000 UTC,LastTransitionTime:2020-07-10 10:30:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.11,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:58abb20e7a0b4d058f79f995dc3b2d92,SystemUUID:a7355a65-57ac-4117-ae3f-f79ca388e0d4,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.4.0-beta.1-34-g49b0743c,KubeletVersion:v1.18.4,KubeProxyVersion:v1.18.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.4],SizeBytes:146649905,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.4],SizeBytes:133416062,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.4],SizeBytes:132840771,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20200619-15f5b3ab],SizeBytes:120473968,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.4],SizeBytes:113093425,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20],SizeBytes:46251412,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793 docker.io/library/busybox:latest],SizeBytes:767885,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jul 15 00:09:08.658: INFO: Logging kubelet events for node latest-worker2 Jul 15 00:09:08.664: INFO: Logging pods the kubelet thinks is on node latest-worker2 Jul 15 00:09:08.669: INFO: svc-latency-rc-cq7nv started at 2020-07-15 00:08:50 +0000 UTC (0+1 container statuses recorded) Jul 15 00:09:08.669: INFO: Container svc-latency-rc ready: true, restart count 0 Jul 15 00:09:08.669: INFO: kube-proxy-s596l started at 2020-07-10 10:30:17 +0000 UTC (0+1 container statuses recorded) Jul 15 00:09:08.669: INFO: 
Container kube-proxy ready: true, restart count 0 Jul 15 00:09:08.669: INFO: kindnet-gkkxx started at 2020-07-10 10:30:17 +0000 UTC (0+1 container statuses recorded) Jul 15 00:09:08.669: INFO: Container kindnet-cni ready: true, restart count 0 W0715 00:09:08.674363 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jul 15 00:09:08.717: INFO: Latency metrics for node latest-worker2 Jul 15 00:09:08.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-2582" for this suite. • Failure [0.325 seconds] [sig-network] IngressClass API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support creating IngressClass API operations [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 00:09:08.463: expected ingressclasses, got []v1.APIResource{v1.APIResource{Name:"networkpolicies", SingularName:"", Namespaced:true, Group:"", Version:"", Kind:"NetworkPolicy", Verbs:v1.Verbs{"create", "delete", "deletecollection", "get", "list", "patch", "update", "watch"}, ShortNames:[]string{"netpol"}, Categories:[]string(nil), StorageVersionHash:"YpfwF18m1G8="}} Expected : false to equal : true /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:210 ------------------------------ {"msg":"FAILED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":294,"completed":127,"skipped":1918,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] Projected 
configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:09:08.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-13862566-40a7-4ce7-ada5-77d476e3932d STEP: Creating a pod to test consume configMaps Jul 15 00:09:08.848: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c5ce7705-619b-45b8-9152-6af482758f14" in namespace "projected-7881" to be "Succeeded or Failed" Jul 15 00:09:08.912: INFO: Pod "pod-projected-configmaps-c5ce7705-619b-45b8-9152-6af482758f14": Phase="Pending", Reason="", readiness=false. Elapsed: 63.98073ms Jul 15 00:09:10.978: INFO: Pod "pod-projected-configmaps-c5ce7705-619b-45b8-9152-6af482758f14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129674183s Jul 15 00:09:12.982: INFO: Pod "pod-projected-configmaps-c5ce7705-619b-45b8-9152-6af482758f14": Phase="Running", Reason="", readiness=true. Elapsed: 4.134218307s Jul 15 00:09:14.986: INFO: Pod "pod-projected-configmaps-c5ce7705-619b-45b8-9152-6af482758f14": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.137803031s STEP: Saw pod success Jul 15 00:09:14.986: INFO: Pod "pod-projected-configmaps-c5ce7705-619b-45b8-9152-6af482758f14" satisfied condition "Succeeded or Failed" Jul 15 00:09:15.003: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-c5ce7705-619b-45b8-9152-6af482758f14 container projected-configmap-volume-test: STEP: delete the pod Jul 15 00:09:15.086: INFO: Waiting for pod pod-projected-configmaps-c5ce7705-619b-45b8-9152-6af482758f14 to disappear Jul 15 00:09:15.106: INFO: Pod pod-projected-configmaps-c5ce7705-619b-45b8-9152-6af482758f14 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:09:15.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7881" for this suite. • [SLOW TEST:6.405 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":294,"completed":128,"skipped":1925,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
[BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:09:15.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:09:15.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8458" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":294,"completed":129,"skipped":1946,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:09:15.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2986.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2986.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2986.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2986.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 15 00:09:23.824: INFO: DNS probes using dns-test-00b4fdbb-8435-4149-948e-a8000d2936cc succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running 
these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2986.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2986.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2986.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2986.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 15 00:09:32.101: INFO: File wheezy_udp@dns-test-service-3.dns-2986.svc.cluster.local from pod dns-2986/dns-test-c1a7465d-1964-4ecd-bcc0-9ef4cda0b148 contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 15 00:09:32.181: INFO: File jessie_udp@dns-test-service-3.dns-2986.svc.cluster.local from pod dns-2986/dns-test-c1a7465d-1964-4ecd-bcc0-9ef4cda0b148 contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 15 00:09:32.181: INFO: Lookups using dns-2986/dns-test-c1a7465d-1964-4ecd-bcc0-9ef4cda0b148 failed for: [wheezy_udp@dns-test-service-3.dns-2986.svc.cluster.local jessie_udp@dns-test-service-3.dns-2986.svc.cluster.local] Jul 15 00:09:37.191: INFO: File wheezy_udp@dns-test-service-3.dns-2986.svc.cluster.local from pod dns-2986/dns-test-c1a7465d-1964-4ecd-bcc0-9ef4cda0b148 contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 15 00:09:37.214: INFO: File jessie_udp@dns-test-service-3.dns-2986.svc.cluster.local from pod dns-2986/dns-test-c1a7465d-1964-4ecd-bcc0-9ef4cda0b148 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jul 15 00:09:37.215: INFO: Lookups using dns-2986/dns-test-c1a7465d-1964-4ecd-bcc0-9ef4cda0b148 failed for: [wheezy_udp@dns-test-service-3.dns-2986.svc.cluster.local jessie_udp@dns-test-service-3.dns-2986.svc.cluster.local] Jul 15 00:09:42.185: INFO: File wheezy_udp@dns-test-service-3.dns-2986.svc.cluster.local from pod dns-2986/dns-test-c1a7465d-1964-4ecd-bcc0-9ef4cda0b148 contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 15 00:09:42.189: INFO: File jessie_udp@dns-test-service-3.dns-2986.svc.cluster.local from pod dns-2986/dns-test-c1a7465d-1964-4ecd-bcc0-9ef4cda0b148 contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 15 00:09:42.189: INFO: Lookups using dns-2986/dns-test-c1a7465d-1964-4ecd-bcc0-9ef4cda0b148 failed for: [wheezy_udp@dns-test-service-3.dns-2986.svc.cluster.local jessie_udp@dns-test-service-3.dns-2986.svc.cluster.local] Jul 15 00:09:47.186: INFO: File wheezy_udp@dns-test-service-3.dns-2986.svc.cluster.local from pod dns-2986/dns-test-c1a7465d-1964-4ecd-bcc0-9ef4cda0b148 contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 15 00:09:47.190: INFO: File jessie_udp@dns-test-service-3.dns-2986.svc.cluster.local from pod dns-2986/dns-test-c1a7465d-1964-4ecd-bcc0-9ef4cda0b148 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jul 15 00:09:47.190: INFO: Lookups using dns-2986/dns-test-c1a7465d-1964-4ecd-bcc0-9ef4cda0b148 failed for: [wheezy_udp@dns-test-service-3.dns-2986.svc.cluster.local jessie_udp@dns-test-service-3.dns-2986.svc.cluster.local] Jul 15 00:09:52.197: INFO: DNS probes using dns-test-c1a7465d-1964-4ecd-bcc0-9ef4cda0b148 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2986.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2986.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2986.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2986.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 15 00:10:00.847: INFO: DNS probes using dns-test-e931fd0c-71e1-4e86-981d-2e3047573e05 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:10:01.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2986" for this suite. 
• [SLOW TEST:45.486 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":294,"completed":130,"skipped":1964,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:10:01.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:10:07.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1406" for this suite. STEP: Destroying namespace "nsdeletetest-7828" for this suite. Jul 15 00:10:07.693: INFO: Namespace nsdeletetest-7828 was already deleted STEP: Destroying namespace "nsdeletetest-2520" for this suite. • [SLOW TEST:6.656 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":294,"completed":131,"skipped":1998,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:10:07.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Jul 15 00:10:07.810: INFO: Waiting up to 5m0s for pod "pod-ad518156-76ec-4972-b37f-67d758e6e118" in namespace "emptydir-5008" to be "Succeeded or Failed" Jul 15 00:10:07.814: INFO: Pod "pod-ad518156-76ec-4972-b37f-67d758e6e118": Phase="Pending", Reason="", readiness=false. Elapsed: 3.78883ms Jul 15 00:10:09.818: INFO: Pod "pod-ad518156-76ec-4972-b37f-67d758e6e118": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008296175s Jul 15 00:10:11.822: INFO: Pod "pod-ad518156-76ec-4972-b37f-67d758e6e118": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012565343s STEP: Saw pod success Jul 15 00:10:11.822: INFO: Pod "pod-ad518156-76ec-4972-b37f-67d758e6e118" satisfied condition "Succeeded or Failed" Jul 15 00:10:11.825: INFO: Trying to get logs from node latest-worker pod pod-ad518156-76ec-4972-b37f-67d758e6e118 container test-container: STEP: delete the pod Jul 15 00:10:11.864: INFO: Waiting for pod pod-ad518156-76ec-4972-b37f-67d758e6e118 to disappear Jul 15 00:10:11.867: INFO: Pod pod-ad518156-76ec-4972-b37f-67d758e6e118 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:10:11.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5008" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":132,"skipped":2012,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:10:11.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Jul 15 00:10:11.956: INFO: Waiting up to 1m0s for all nodes to be ready Jul 15 00:11:12.026: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Jul 15 00:11:12.083: INFO: Created pod: pod0-sched-preemption-low-priority Jul 15 00:11:12.116: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. 
STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:11:44.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3165" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:92.435 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":294,"completed":133,"skipped":2017,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:11:44.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should 
support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Jul 15 00:11:44.400: INFO: Waiting up to 5m0s for pod "pod-f10e7fe4-0467-4a62-87e6-3b8d8e85e760" in namespace "emptydir-5284" to be "Succeeded or Failed" Jul 15 00:11:44.408: INFO: Pod "pod-f10e7fe4-0467-4a62-87e6-3b8d8e85e760": Phase="Pending", Reason="", readiness=false. Elapsed: 8.214576ms Jul 15 00:11:46.412: INFO: Pod "pod-f10e7fe4-0467-4a62-87e6-3b8d8e85e760": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011645792s Jul 15 00:11:48.416: INFO: Pod "pod-f10e7fe4-0467-4a62-87e6-3b8d8e85e760": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015877053s STEP: Saw pod success Jul 15 00:11:48.416: INFO: Pod "pod-f10e7fe4-0467-4a62-87e6-3b8d8e85e760" satisfied condition "Succeeded or Failed" Jul 15 00:11:48.418: INFO: Trying to get logs from node latest-worker pod pod-f10e7fe4-0467-4a62-87e6-3b8d8e85e760 container test-container: STEP: delete the pod Jul 15 00:11:48.529: INFO: Waiting for pod pod-f10e7fe4-0467-4a62-87e6-3b8d8e85e760 to disappear Jul 15 00:11:48.576: INFO: Pod pod-f10e7fe4-0467-4a62-87e6-3b8d8e85e760 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:11:48.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5284" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":134,"skipped":2026,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:11:48.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Jul 15 00:11:48.713: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix734105116/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:11:48.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5194" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":294,"completed":135,"skipped":2053,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:11:48.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document 
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:11:48.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1438" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":294,"completed":136,"skipped":2080,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:11:48.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 00:11:49.014: INFO: Create a RollingUpdate DaemonSet Jul 15 00:11:49.019: INFO: Check that daemon pods launch on every node of the cluster Jul 15 00:11:49.026: 
INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:11:49.044: INFO: Number of nodes with available pods: 0 Jul 15 00:11:49.044: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:11:50.435: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:11:50.505: INFO: Number of nodes with available pods: 0 Jul 15 00:11:50.505: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:11:51.244: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:11:51.923: INFO: Number of nodes with available pods: 0 Jul 15 00:11:51.923: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:11:52.092: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:11:52.202: INFO: Number of nodes with available pods: 0 Jul 15 00:11:52.202: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:11:53.056: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:11:53.112: INFO: Number of nodes with available pods: 0 Jul 15 00:11:53.112: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:11:54.049: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:11:54.052: INFO: Number of nodes with available pods: 2 Jul 15 00:11:54.052: INFO: Number of running 
nodes: 2, number of available pods: 2 Jul 15 00:11:54.052: INFO: Update the DaemonSet to trigger a rollout Jul 15 00:11:54.061: INFO: Updating DaemonSet daemon-set Jul 15 00:12:10.102: INFO: Roll back the DaemonSet before rollout is complete Jul 15 00:12:10.111: INFO: Updating DaemonSet daemon-set Jul 15 00:12:10.111: INFO: Make sure DaemonSet rollback is complete Jul 15 00:12:10.128: INFO: Wrong image for pod: daemon-set-wqbsk. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jul 15 00:12:10.128: INFO: Pod daemon-set-wqbsk is not available Jul 15 00:12:10.159: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:12:11.162: INFO: Wrong image for pod: daemon-set-wqbsk. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jul 15 00:12:11.163: INFO: Pod daemon-set-wqbsk is not available Jul 15 00:12:11.167: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:12:12.189: INFO: Wrong image for pod: daemon-set-wqbsk. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Jul 15 00:12:12.189: INFO: Pod daemon-set-wqbsk is not available Jul 15 00:12:12.192: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:12:13.163: INFO: Pod daemon-set-99cp5 is not available Jul 15 00:12:13.166: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4979, will wait for the garbage collector to delete the pods Jul 15 00:12:13.229: INFO: Deleting DaemonSet.extensions daemon-set took: 6.204965ms Jul 15 00:12:13.530: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.180689ms Jul 15 00:12:19.260: INFO: Number of nodes with available pods: 0 Jul 15 00:12:19.260: INFO: Number of running nodes: 0, number of available pods: 0 Jul 15 00:12:19.262: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4979/daemonsets","resourceVersion":"1225659"},"items":null} Jul 15 00:12:19.265: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4979/pods","resourceVersion":"1225659"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:12:19.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4979" for this suite. 
• [SLOW TEST:30.397 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":294,"completed":137,"skipped":2124,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:12:19.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info Jul 15 00:12:19.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config cluster-info' Jul 15 00:12:19.450: INFO: stderr: "" Jul 15 00:12:19.450: INFO: stdout: 
"\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:39087\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:39087/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:12:19.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6460" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":294,"completed":138,"skipped":2126,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:12:19.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-h5zj STEP: Creating a pod to test atomic-volume-subpath Jul 15 00:12:19.589: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-h5zj" in namespace "subpath-3553" to be "Succeeded or Failed" Jul 15 00:12:19.602: INFO: Pod "pod-subpath-test-secret-h5zj": Phase="Pending", Reason="", readiness=false. Elapsed: 13.512162ms Jul 15 00:12:21.606: INFO: Pod "pod-subpath-test-secret-h5zj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017227474s Jul 15 00:12:23.610: INFO: Pod "pod-subpath-test-secret-h5zj": Phase="Running", Reason="", readiness=true. Elapsed: 4.021653969s Jul 15 00:12:25.615: INFO: Pod "pod-subpath-test-secret-h5zj": Phase="Running", Reason="", readiness=true. Elapsed: 6.0263904s Jul 15 00:12:27.620: INFO: Pod "pod-subpath-test-secret-h5zj": Phase="Running", Reason="", readiness=true. Elapsed: 8.030794944s Jul 15 00:12:29.624: INFO: Pod "pod-subpath-test-secret-h5zj": Phase="Running", Reason="", readiness=true. Elapsed: 10.035761183s Jul 15 00:12:31.629: INFO: Pod "pod-subpath-test-secret-h5zj": Phase="Running", Reason="", readiness=true. Elapsed: 12.040099647s Jul 15 00:12:33.633: INFO: Pod "pod-subpath-test-secret-h5zj": Phase="Running", Reason="", readiness=true. Elapsed: 14.044489132s Jul 15 00:12:35.637: INFO: Pod "pod-subpath-test-secret-h5zj": Phase="Running", Reason="", readiness=true. Elapsed: 16.048641301s Jul 15 00:12:37.641: INFO: Pod "pod-subpath-test-secret-h5zj": Phase="Running", Reason="", readiness=true. Elapsed: 18.052454275s Jul 15 00:12:39.646: INFO: Pod "pod-subpath-test-secret-h5zj": Phase="Running", Reason="", readiness=true. Elapsed: 20.056970545s Jul 15 00:12:41.649: INFO: Pod "pod-subpath-test-secret-h5zj": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.060504741s Jul 15 00:12:43.654: INFO: Pod "pod-subpath-test-secret-h5zj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.064885716s STEP: Saw pod success Jul 15 00:12:43.654: INFO: Pod "pod-subpath-test-secret-h5zj" satisfied condition "Succeeded or Failed" Jul 15 00:12:43.656: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-h5zj container test-container-subpath-secret-h5zj: STEP: delete the pod Jul 15 00:12:43.744: INFO: Waiting for pod pod-subpath-test-secret-h5zj to disappear Jul 15 00:12:43.908: INFO: Pod pod-subpath-test-secret-h5zj no longer exists STEP: Deleting pod pod-subpath-test-secret-h5zj Jul 15 00:12:43.908: INFO: Deleting pod "pod-subpath-test-secret-h5zj" in namespace "subpath-3553" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:12:43.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3553" for this suite. 
• [SLOW TEST:24.511 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":294,"completed":139,"skipped":2135,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:12:43.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:12:55.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4221" for this suite. • [SLOW TEST:11.605 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":294,"completed":140,"skipped":2165,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:12:55.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 00:12:55.665: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:12:59.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2587" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":294,"completed":141,"skipped":2175,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:12:59.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Jul 15 00:12:59.984: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config create -f -' Jul 15 00:13:00.322: INFO: stderr: "" Jul 15 00:13:00.322: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Jul 15 00:13:00.322: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config diff -f -' Jul 15 00:13:00.761: INFO: rc: 1 Jul 15 00:13:00.761: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config delete -f -' Jul 15 00:13:00.872: INFO: stderr: "" Jul 15 00:13:00.872: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:13:00.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5412" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":294,"completed":142,"skipped":2181,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:13:00.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 00:13:00.982: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-845f987e-0a56-4e8d-b3e1-52b05d61ba48" in namespace "security-context-test-5938" to be "Succeeded or Failed" Jul 15 00:13:00.993: INFO: Pod "alpine-nnp-false-845f987e-0a56-4e8d-b3e1-52b05d61ba48": Phase="Pending", Reason="", readiness=false. Elapsed: 10.853958ms Jul 15 00:13:02.997: INFO: Pod "alpine-nnp-false-845f987e-0a56-4e8d-b3e1-52b05d61ba48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015287504s Jul 15 00:13:05.082: INFO: Pod "alpine-nnp-false-845f987e-0a56-4e8d-b3e1-52b05d61ba48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099648707s Jul 15 00:13:05.082: INFO: Pod "alpine-nnp-false-845f987e-0a56-4e8d-b3e1-52b05d61ba48" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:13:05.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5938" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":143,"skipped":2186,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:13:05.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-2144/secret-test-545e3db4-2424-44ed-853a-ff39d2b0669e STEP: Creating a pod to test consume secrets Jul 15 00:13:05.287: INFO: Waiting up to 5m0s for pod "pod-configmaps-7761d628-2b22-41f1-abed-d3b9064cc16d" in namespace "secrets-2144" to be "Succeeded or Failed" Jul 15 00:13:05.322: INFO: Pod "pod-configmaps-7761d628-2b22-41f1-abed-d3b9064cc16d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.813099ms Jul 15 00:13:07.351: INFO: Pod "pod-configmaps-7761d628-2b22-41f1-abed-d3b9064cc16d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.063987496s Jul 15 00:13:09.361: INFO: Pod "pod-configmaps-7761d628-2b22-41f1-abed-d3b9064cc16d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073570245s STEP: Saw pod success Jul 15 00:13:09.361: INFO: Pod "pod-configmaps-7761d628-2b22-41f1-abed-d3b9064cc16d" satisfied condition "Succeeded or Failed" Jul 15 00:13:09.364: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-7761d628-2b22-41f1-abed-d3b9064cc16d container env-test: STEP: delete the pod Jul 15 00:13:09.389: INFO: Waiting for pod pod-configmaps-7761d628-2b22-41f1-abed-d3b9064cc16d to disappear Jul 15 00:13:09.393: INFO: Pod pod-configmaps-7761d628-2b22-41f1-abed-d3b9064cc16d no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:13:09.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2144" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":294,"completed":144,"skipped":2209,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:13:09.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Jul 15 00:13:09.506: INFO: Waiting up to 5m0s for pod "pod-1e559426-f584-4cb9-8701-c4082be393c9" in namespace "emptydir-3071" to be "Succeeded or Failed" Jul 15 00:13:09.513: INFO: Pod "pod-1e559426-f584-4cb9-8701-c4082be393c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.975199ms Jul 15 00:13:11.518: INFO: Pod "pod-1e559426-f584-4cb9-8701-c4082be393c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011265394s Jul 15 00:13:13.523: INFO: Pod "pod-1e559426-f584-4cb9-8701-c4082be393c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016345066s STEP: Saw pod success Jul 15 00:13:13.523: INFO: Pod "pod-1e559426-f584-4cb9-8701-c4082be393c9" satisfied condition "Succeeded or Failed" Jul 15 00:13:13.526: INFO: Trying to get logs from node latest-worker pod pod-1e559426-f584-4cb9-8701-c4082be393c9 container test-container: STEP: delete the pod Jul 15 00:13:13.544: INFO: Waiting for pod pod-1e559426-f584-4cb9-8701-c4082be393c9 to disappear Jul 15 00:13:13.564: INFO: Pod pod-1e559426-f584-4cb9-8701-c4082be393c9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:13:13.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3071" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":145,"skipped":2214,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:13:13.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:13:17.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2912" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":294,"completed":146,"skipped":2227,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:13:17.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-9c777703-147d-4b48-8ed9-1677457a425f STEP: Creating configMap with name cm-test-opt-upd-a27a578d-dd87-4747-957f-e674f22380ff STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-9c777703-147d-4b48-8ed9-1677457a425f STEP: Updating configmap cm-test-opt-upd-a27a578d-dd87-4747-957f-e674f22380ff STEP: Creating configMap with name cm-test-opt-create-431b0c97-9707-4fac-a0be-3f0d2fc86365 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:13:26.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"configmap-4587" for this suite. • [SLOW TEST:8.522 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":147,"skipped":2254,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:13:26.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:307 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Jul 15 00:13:26.612: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1449' Jul 15 00:13:26.897: INFO: stderr: "" Jul 15 00:13:26.897: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 15 00:13:26.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1449' Jul 15 00:13:27.031: INFO: stderr: "" Jul 15 00:13:27.032: INFO: stdout: "update-demo-nautilus-vr57x update-demo-nautilus-wt6wk " Jul 15 00:13:27.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vr57x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1449' Jul 15 00:13:27.163: INFO: stderr: "" Jul 15 00:13:27.163: INFO: stdout: "" Jul 15 00:13:27.163: INFO: update-demo-nautilus-vr57x is created but not running Jul 15 00:13:32.163: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1449' Jul 15 00:13:32.348: INFO: stderr: "" Jul 15 00:13:32.348: INFO: stdout: "update-demo-nautilus-vr57x update-demo-nautilus-wt6wk " Jul 15 00:13:32.348: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vr57x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1449' Jul 15 00:13:32.747: INFO: stderr: "" Jul 15 00:13:32.747: INFO: stdout: "true" Jul 15 00:13:32.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vr57x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1449' Jul 15 00:13:32.846: INFO: stderr: "" Jul 15 00:13:32.847: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 15 00:13:32.847: INFO: validating pod update-demo-nautilus-vr57x Jul 15 00:13:32.851: INFO: got data: { "image": "nautilus.jpg" } Jul 15 00:13:32.851: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 15 00:13:32.851: INFO: update-demo-nautilus-vr57x is verified up and running Jul 15 00:13:32.851: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wt6wk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1449' Jul 15 00:13:32.997: INFO: stderr: "" Jul 15 00:13:32.997: INFO: stdout: "true" Jul 15 00:13:32.997: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wt6wk -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1449' Jul 15 00:13:33.197: INFO: stderr: "" Jul 15 00:13:33.198: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 15 00:13:33.198: INFO: validating pod update-demo-nautilus-wt6wk Jul 15 00:13:33.222: INFO: got data: { "image": "nautilus.jpg" } Jul 15 00:13:33.223: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 15 00:13:33.223: INFO: update-demo-nautilus-wt6wk is verified up and running STEP: using delete to clean up resources Jul 15 00:13:33.223: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1449' Jul 15 00:13:33.351: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 15 00:13:33.351: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jul 15 00:13:33.351: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1449' Jul 15 00:13:33.466: INFO: stderr: "No resources found in kubectl-1449 namespace.\n" Jul 15 00:13:33.467: INFO: stdout: "" Jul 15 00:13:33.467: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1449 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 15 00:13:33.603: INFO: stderr: "" Jul 15 00:13:33.603: INFO: stdout: "update-demo-nautilus-vr57x\nupdate-demo-nautilus-wt6wk\n" Jul 15 00:13:34.104: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get 
rc,svc -l name=update-demo --no-headers --namespace=kubectl-1449' Jul 15 00:13:34.261: INFO: stderr: "No resources found in kubectl-1449 namespace.\n" Jul 15 00:13:34.261: INFO: stdout: "" Jul 15 00:13:34.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1449 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 15 00:13:34.489: INFO: stderr: "" Jul 15 00:13:34.489: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:13:34.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1449" for this suite. • [SLOW TEST:7.988 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:305 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":294,"completed":148,"skipped":2268,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:13:34.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Jul 15 00:13:34.871: INFO: Waiting up to 5m0s for pod "client-containers-ba46d070-d157-402f-9c62-9c3ee64fc2b4" in namespace "containers-7898" to be "Succeeded or Failed" Jul 15 00:13:34.932: INFO: Pod "client-containers-ba46d070-d157-402f-9c62-9c3ee64fc2b4": Phase="Pending", Reason="", readiness=false. Elapsed: 60.443124ms Jul 15 00:13:36.936: INFO: Pod "client-containers-ba46d070-d157-402f-9c62-9c3ee64fc2b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06473216s Jul 15 00:13:38.945: INFO: Pod "client-containers-ba46d070-d157-402f-9c62-9c3ee64fc2b4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.07345133s STEP: Saw pod success Jul 15 00:13:38.945: INFO: Pod "client-containers-ba46d070-d157-402f-9c62-9c3ee64fc2b4" satisfied condition "Succeeded or Failed" Jul 15 00:13:38.946: INFO: Trying to get logs from node latest-worker pod client-containers-ba46d070-d157-402f-9c62-9c3ee64fc2b4 container test-container: STEP: delete the pod Jul 15 00:13:38.993: INFO: Waiting for pod client-containers-ba46d070-d157-402f-9c62-9c3ee64fc2b4 to disappear Jul 15 00:13:38.999: INFO: Pod client-containers-ba46d070-d157-402f-9c62-9c3ee64fc2b4 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:13:38.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7898" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":294,"completed":149,"skipped":2276,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:13:39.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 15 00:13:43.599: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2f6098e9-985a-410c-a185-443f45129299" Jul 15 00:13:43.600: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2f6098e9-985a-410c-a185-443f45129299" in namespace "pods-4537" to be "terminated due to deadline exceeded" Jul 15 00:13:43.705: INFO: Pod "pod-update-activedeadlineseconds-2f6098e9-985a-410c-a185-443f45129299": Phase="Running", Reason="", readiness=true. Elapsed: 105.65437ms Jul 15 00:13:45.709: INFO: Pod "pod-update-activedeadlineseconds-2f6098e9-985a-410c-a185-443f45129299": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.10938519s Jul 15 00:13:45.709: INFO: Pod "pod-update-activedeadlineseconds-2f6098e9-985a-410c-a185-443f45129299" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:13:45.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4537" for this suite. 
• [SLOW TEST:6.710 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":294,"completed":150,"skipped":2308,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:13:45.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-04f4b2f7-1964-4475-8678-f334674d104d STEP: Creating secret with name secret-projected-all-test-volume-6f39cb80-70d9-43f2-8710-b9b13f73d23c STEP: Creating a pod to test Check all projections for projected volume plugin Jul 15 00:13:45.841: INFO: Waiting up to 5m0s for pod 
"projected-volume-0f824d5a-ad31-4e14-b77e-bee3f7482d24" in namespace "projected-5758" to be "Succeeded or Failed" Jul 15 00:13:45.874: INFO: Pod "projected-volume-0f824d5a-ad31-4e14-b77e-bee3f7482d24": Phase="Pending", Reason="", readiness=false. Elapsed: 33.729152ms Jul 15 00:13:47.903: INFO: Pod "projected-volume-0f824d5a-ad31-4e14-b77e-bee3f7482d24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062218099s Jul 15 00:13:49.906: INFO: Pod "projected-volume-0f824d5a-ad31-4e14-b77e-bee3f7482d24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065471045s STEP: Saw pod success Jul 15 00:13:49.906: INFO: Pod "projected-volume-0f824d5a-ad31-4e14-b77e-bee3f7482d24" satisfied condition "Succeeded or Failed" Jul 15 00:13:49.909: INFO: Trying to get logs from node latest-worker pod projected-volume-0f824d5a-ad31-4e14-b77e-bee3f7482d24 container projected-all-volume-test: STEP: delete the pod Jul 15 00:13:49.982: INFO: Waiting for pod projected-volume-0f824d5a-ad31-4e14-b77e-bee3f7482d24 to disappear Jul 15 00:13:49.999: INFO: Pod projected-volume-0f824d5a-ad31-4e14-b77e-bee3f7482d24 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:13:50.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5758" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":294,"completed":151,"skipped":2308,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:13:50.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-f9631dac-ba93-4b85-82e7-e8c6d1540538 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-f9631dac-ba93-4b85-82e7-e8c6d1540538 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:15:10.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9525" for this suite. 
• [SLOW TEST:80.557 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":152,"skipped":2321,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:15:10.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jul 15 00:15:10.982: INFO: 
deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jul 15 00:15:12.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730368911, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730368911, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730368911, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730368910, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-869fb7d886\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 15 00:15:16.880: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 00:15:16.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:15:18.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-622" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.642 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":294,"completed":153,"skipped":2327,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:15:18.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 15 00:15:18.320: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8dd676da-7cb7-493f-9eaf-33d41dc88fd6" in namespace "projected-9889" to be "Succeeded or Failed" Jul 15 00:15:18.326: INFO: Pod "downwardapi-volume-8dd676da-7cb7-493f-9eaf-33d41dc88fd6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022323ms Jul 15 00:15:20.330: INFO: Pod "downwardapi-volume-8dd676da-7cb7-493f-9eaf-33d41dc88fd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010221549s Jul 15 00:15:22.332: INFO: Pod "downwardapi-volume-8dd676da-7cb7-493f-9eaf-33d41dc88fd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012604716s STEP: Saw pod success Jul 15 00:15:22.332: INFO: Pod "downwardapi-volume-8dd676da-7cb7-493f-9eaf-33d41dc88fd6" satisfied condition "Succeeded or Failed" Jul 15 00:15:22.334: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8dd676da-7cb7-493f-9eaf-33d41dc88fd6 container client-container: STEP: delete the pod Jul 15 00:15:22.618: INFO: Waiting for pod downwardapi-volume-8dd676da-7cb7-493f-9eaf-33d41dc88fd6 to disappear Jul 15 00:15:22.630: INFO: Pod downwardapi-volume-8dd676da-7cb7-493f-9eaf-33d41dc88fd6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:15:22.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9889" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":154,"skipped":2336,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:15:22.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Jul 15 00:15:22.702: INFO: Waiting up to 5m0s for pod "var-expansion-43110a28-8d85-45d6-b3bc-d5a078691fe6" in namespace "var-expansion-8137" to be "Succeeded or Failed" Jul 15 00:15:22.704: INFO: Pod "var-expansion-43110a28-8d85-45d6-b3bc-d5a078691fe6": Phase="Pending", Reason="", readiness=false. Elapsed: 1.878885ms Jul 15 00:15:24.706: INFO: Pod "var-expansion-43110a28-8d85-45d6-b3bc-d5a078691fe6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004663261s Jul 15 00:15:26.711: INFO: Pod "var-expansion-43110a28-8d85-45d6-b3bc-d5a078691fe6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.00922064s STEP: Saw pod success Jul 15 00:15:26.711: INFO: Pod "var-expansion-43110a28-8d85-45d6-b3bc-d5a078691fe6" satisfied condition "Succeeded or Failed" Jul 15 00:15:26.714: INFO: Trying to get logs from node latest-worker2 pod var-expansion-43110a28-8d85-45d6-b3bc-d5a078691fe6 container dapi-container: STEP: delete the pod Jul 15 00:15:26.735: INFO: Waiting for pod var-expansion-43110a28-8d85-45d6-b3bc-d5a078691fe6 to disappear Jul 15 00:15:26.754: INFO: Pod var-expansion-43110a28-8d85-45d6-b3bc-d5a078691fe6 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:15:26.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8137" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":294,"completed":155,"skipped":2355,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:15:26.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-3938 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3938 to expose endpoints map[] Jul 15 00:15:28.233: INFO: successfully validated that service endpoint-test2 in namespace services-3938 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-3938 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3938 to expose endpoints map[pod1:[80]] Jul 15 00:15:33.992: INFO: successfully validated that service endpoint-test2 in namespace services-3938 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-3938 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3938 to expose endpoints map[pod1:[80] pod2:[80]] Jul 15 00:15:37.143: INFO: successfully validated that service endpoint-test2 in namespace services-3938 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-3938 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3938 to expose endpoints map[pod2:[80]] Jul 15 00:15:37.185: INFO: successfully validated that service endpoint-test2 in namespace services-3938 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-3938 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3938 to expose endpoints map[] Jul 15 00:15:38.207: INFO: successfully validated that service endpoint-test2 in namespace services-3938 exposes endpoints map[] [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:15:38.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "services-3938" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:11.613 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":294,"completed":156,"skipped":2396,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:15:38.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 00:15:42.546: INFO: Waiting up to 5m0s for pod "client-envvars-26150f97-d258-4213-a300-3b60ce2745dd" in namespace "pods-7730" to be "Succeeded or 
Failed" Jul 15 00:15:42.566: INFO: Pod "client-envvars-26150f97-d258-4213-a300-3b60ce2745dd": Phase="Pending", Reason="", readiness=false. Elapsed: 20.116698ms Jul 15 00:15:45.006: INFO: Pod "client-envvars-26150f97-d258-4213-a300-3b60ce2745dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.460697024s Jul 15 00:15:47.011: INFO: Pod "client-envvars-26150f97-d258-4213-a300-3b60ce2745dd": Phase="Running", Reason="", readiness=true. Elapsed: 4.465420554s Jul 15 00:15:49.024: INFO: Pod "client-envvars-26150f97-d258-4213-a300-3b60ce2745dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.478829049s STEP: Saw pod success Jul 15 00:15:49.024: INFO: Pod "client-envvars-26150f97-d258-4213-a300-3b60ce2745dd" satisfied condition "Succeeded or Failed" Jul 15 00:15:49.027: INFO: Trying to get logs from node latest-worker2 pod client-envvars-26150f97-d258-4213-a300-3b60ce2745dd container env3cont: STEP: delete the pod Jul 15 00:15:49.085: INFO: Waiting for pod client-envvars-26150f97-d258-4213-a300-3b60ce2745dd to disappear Jul 15 00:15:49.112: INFO: Pod client-envvars-26150f97-d258-4213-a300-3b60ce2745dd no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:15:49.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7730" for this suite. 
• [SLOW TEST:10.804 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":294,"completed":157,"skipped":2410,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:15:49.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:15:56.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7249" for this suite. • [SLOW TEST:7.167 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":294,"completed":158,"skipped":2428,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:15:56.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-132089ce-f64e-4a63-a8a0-343f1e1944f7 STEP: Creating a pod to test consume configMaps Jul 15 00:15:56.434: INFO: Waiting up to 5m0s for pod "pod-configmaps-bd7d1551-a05b-4375-a47b-311e5f750e4e" in namespace "configmap-5599" to be "Succeeded or Failed" Jul 15 00:15:56.441: INFO: Pod "pod-configmaps-bd7d1551-a05b-4375-a47b-311e5f750e4e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.95633ms Jul 15 00:15:58.474: INFO: Pod "pod-configmaps-bd7d1551-a05b-4375-a47b-311e5f750e4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040071609s Jul 15 00:16:00.478: INFO: Pod "pod-configmaps-bd7d1551-a05b-4375-a47b-311e5f750e4e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.044341551s STEP: Saw pod success Jul 15 00:16:00.478: INFO: Pod "pod-configmaps-bd7d1551-a05b-4375-a47b-311e5f750e4e" satisfied condition "Succeeded or Failed" Jul 15 00:16:00.481: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-bd7d1551-a05b-4375-a47b-311e5f750e4e container configmap-volume-test: STEP: delete the pod Jul 15 00:16:00.502: INFO: Waiting for pod pod-configmaps-bd7d1551-a05b-4375-a47b-311e5f750e4e to disappear Jul 15 00:16:00.506: INFO: Pod pod-configmaps-bd7d1551-a05b-4375-a47b-311e5f750e4e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:16:00.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5599" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":294,"completed":159,"skipped":2467,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:16:00.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 15 00:16:00.619: INFO: Waiting up to 5m0s for pod "downwardapi-volume-824d865b-58fa-4541-a301-9b186d05ba16" in namespace "downward-api-175" to be "Succeeded or Failed" Jul 15 00:16:00.638: INFO: Pod "downwardapi-volume-824d865b-58fa-4541-a301-9b186d05ba16": Phase="Pending", Reason="", readiness=false. Elapsed: 19.487378ms Jul 15 00:16:02.671: INFO: Pod "downwardapi-volume-824d865b-58fa-4541-a301-9b186d05ba16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052541427s Jul 15 00:16:04.675: INFO: Pod "downwardapi-volume-824d865b-58fa-4541-a301-9b186d05ba16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056606479s STEP: Saw pod success Jul 15 00:16:04.675: INFO: Pod "downwardapi-volume-824d865b-58fa-4541-a301-9b186d05ba16" satisfied condition "Succeeded or Failed" Jul 15 00:16:04.678: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-824d865b-58fa-4541-a301-9b186d05ba16 container client-container: STEP: delete the pod Jul 15 00:16:04.770: INFO: Waiting for pod downwardapi-volume-824d865b-58fa-4541-a301-9b186d05ba16 to disappear Jul 15 00:16:04.893: INFO: Pod downwardapi-volume-824d865b-58fa-4541-a301-9b186d05ba16 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:16:04.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-175" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":294,"completed":160,"skipped":2479,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:16:04.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:16:05.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8962" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":294,"completed":161,"skipped":2493,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} S ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:16:05.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-7510 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7510 STEP: creating replication controller externalsvc in namespace services-7510 I0715 00:16:05.469299 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7510, replica count: 2 I0715 
00:16:08.519751 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0715 00:16:11.519995 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jul 15 00:16:11.593: INFO: Creating new exec pod Jul 15 00:16:15.616: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-7510 execpod4dg6x -- /bin/sh -x -c nslookup clusterip-service.services-7510.svc.cluster.local' Jul 15 00:16:16.049: INFO: stderr: "I0715 00:16:15.754401 1541 log.go:181] (0xc000eacfd0) (0xc000969b80) Create stream\nI0715 00:16:15.754466 1541 log.go:181] (0xc000eacfd0) (0xc000969b80) Stream added, broadcasting: 1\nI0715 00:16:15.760327 1541 log.go:181] (0xc000eacfd0) Reply frame received for 1\nI0715 00:16:15.760374 1541 log.go:181] (0xc000eacfd0) (0xc0008aa960) Create stream\nI0715 00:16:15.760391 1541 log.go:181] (0xc000eacfd0) (0xc0008aa960) Stream added, broadcasting: 3\nI0715 00:16:15.761481 1541 log.go:181] (0xc000eacfd0) Reply frame received for 3\nI0715 00:16:15.761521 1541 log.go:181] (0xc000eacfd0) (0xc0000c9f40) Create stream\nI0715 00:16:15.761532 1541 log.go:181] (0xc000eacfd0) (0xc0000c9f40) Stream added, broadcasting: 5\nI0715 00:16:15.762562 1541 log.go:181] (0xc000eacfd0) Reply frame received for 5\nI0715 00:16:15.821730 1541 log.go:181] (0xc000eacfd0) Data frame received for 5\nI0715 00:16:15.821775 1541 log.go:181] (0xc0000c9f40) (5) Data frame handling\nI0715 00:16:15.821797 1541 log.go:181] (0xc0000c9f40) (5) Data frame sent\n+ nslookup clusterip-service.services-7510.svc.cluster.local\nI0715 00:16:16.042692 1541 log.go:181] (0xc000eacfd0) Data frame received for 3\nI0715 00:16:16.042855 1541 log.go:181] (0xc0008aa960) (3) Data frame handling\nI0715 
00:16:16.042970 1541 log.go:181] (0xc0008aa960) (3) Data frame sent\nI0715 00:16:16.043165 1541 log.go:181] (0xc000eacfd0) Data frame received for 3\nI0715 00:16:16.043276 1541 log.go:181] (0xc0008aa960) (3) Data frame handling\nI0715 00:16:16.043398 1541 log.go:181] (0xc000eacfd0) Data frame received for 5\nI0715 00:16:16.043425 1541 log.go:181] (0xc0000c9f40) (5) Data frame handling\nI0715 00:16:16.044545 1541 log.go:181] (0xc000eacfd0) Data frame received for 1\nI0715 00:16:16.044577 1541 log.go:181] (0xc000969b80) (1) Data frame handling\nI0715 00:16:16.044595 1541 log.go:181] (0xc000969b80) (1) Data frame sent\nI0715 00:16:16.044622 1541 log.go:181] (0xc000eacfd0) (0xc000969b80) Stream removed, broadcasting: 1\nI0715 00:16:16.045190 1541 log.go:181] (0xc000eacfd0) (0xc000969b80) Stream removed, broadcasting: 1\nI0715 00:16:16.045209 1541 log.go:181] (0xc000eacfd0) (0xc0008aa960) Stream removed, broadcasting: 3\nI0715 00:16:16.045217 1541 log.go:181] (0xc000eacfd0) (0xc0000c9f40) Stream removed, broadcasting: 5\n" Jul 15 00:16:16.049: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-7510.svc.cluster.local\tcanonical name = externalsvc.services-7510.svc.cluster.local.\nName:\texternalsvc.services-7510.svc.cluster.local\nAddress: 10.108.252.73\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7510, will wait for the garbage collector to delete the pods Jul 15 00:16:16.373: INFO: Deleting ReplicationController externalsvc took: 256.287424ms Jul 15 00:16:16.773: INFO: Terminating ReplicationController externalsvc pods took: 400.52401ms Jul 15 00:16:21.917: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:16:21.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7510" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:16.889 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":294,"completed":162,"skipped":2494,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:16:21.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 00:16:22.047: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config version' Jul 15 00:16:22.216: 
INFO: stderr: "" Jul 15 00:16:22.216: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"20+\", GitVersion:\"v1.20.0-alpha.0.4+2d327ac4558d78\", GitCommit:\"2d327ac4558d78c744004db178dacb80bd6e0b9e\", GitTreeState:\"clean\", BuildDate:\"2020-07-10T11:25:25Z\", GoVersion:\"go1.14.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.4\", GitCommit:\"c96aede7b5205121079932896c4ad89bb93260af\", GitTreeState:\"clean\", BuildDate:\"2020-06-20T01:49:49Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:16:22.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1144" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":294,"completed":163,"skipped":2503,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:16:22.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should 
resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4417 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4417;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4417 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4417;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4417.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4417.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4417.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4417.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4417.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4417.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4417.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4417.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4417.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4417.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4417.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4417.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4417.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 197.23.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.23.197_udp@PTR;check="$$(dig +tcp +noall +answer +search 197.23.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.23.197_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4417 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4417;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4417 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4417;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4417.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4417.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4417.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4417.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4417.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4417.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4417.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4417.svc;check="$$(dig +notcp +noall +answer +search 
_http._tcp.test-service-2.dns-4417.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4417.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4417.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4417.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4417.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 197.23.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.23.197_udp@PTR;check="$$(dig +tcp +noall +answer +search 197.23.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.23.197_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 15 00:16:31.784: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:31.788: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:31.792: INFO: Unable to read wheezy_udp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:31.795: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: 
the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:31.799: INFO: Unable to read wheezy_udp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:31.802: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:31.806: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:31.809: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:31.832: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:31.835: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:31.838: INFO: Unable to read jessie_udp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:31.841: INFO: Unable to read jessie_tcp@dns-test-service.dns-4417 from pod 
dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:31.844: INFO: Unable to read jessie_udp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:31.846: INFO: Unable to read jessie_tcp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:31.850: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:31.852: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:31.872: INFO: Lookups using dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4417 wheezy_tcp@dns-test-service.dns-4417 wheezy_udp@dns-test-service.dns-4417.svc wheezy_tcp@dns-test-service.dns-4417.svc wheezy_udp@_http._tcp.dns-test-service.dns-4417.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4417.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4417 jessie_tcp@dns-test-service.dns-4417 jessie_udp@dns-test-service.dns-4417.svc jessie_tcp@dns-test-service.dns-4417.svc jessie_udp@_http._tcp.dns-test-service.dns-4417.svc jessie_tcp@_http._tcp.dns-test-service.dns-4417.svc] Jul 15 00:16:36.877: INFO: Unable to read 
wheezy_udp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:36.881: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:36.884: INFO: Unable to read wheezy_udp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:36.888: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:36.891: INFO: Unable to read wheezy_udp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:36.894: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:36.897: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:36.899: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:36.920: 
INFO: Unable to read jessie_udp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:36.923: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:36.926: INFO: Unable to read jessie_udp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:36.929: INFO: Unable to read jessie_tcp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:36.931: INFO: Unable to read jessie_udp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:36.933: INFO: Unable to read jessie_tcp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:36.936: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:36.939: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) 
Jul 15 00:16:36.955: INFO: Lookups using dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4417 wheezy_tcp@dns-test-service.dns-4417 wheezy_udp@dns-test-service.dns-4417.svc wheezy_tcp@dns-test-service.dns-4417.svc wheezy_udp@_http._tcp.dns-test-service.dns-4417.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4417.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4417 jessie_tcp@dns-test-service.dns-4417 jessie_udp@dns-test-service.dns-4417.svc jessie_tcp@dns-test-service.dns-4417.svc jessie_udp@_http._tcp.dns-test-service.dns-4417.svc jessie_tcp@_http._tcp.dns-test-service.dns-4417.svc] Jul 15 00:16:41.878: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:41.882: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:41.886: INFO: Unable to read wheezy_udp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:41.890: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:41.894: INFO: Unable to read wheezy_udp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:41.897: 
INFO: Unable to read wheezy_tcp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:41.900: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:41.903: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:41.926: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:41.934: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:41.938: INFO: Unable to read jessie_udp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:41.941: INFO: Unable to read jessie_tcp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:41.943: INFO: Unable to read jessie_udp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) 
Jul 15 00:16:41.946: INFO: Unable to read jessie_tcp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:41.948: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:41.951: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:41.970: INFO: Lookups using dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4417 wheezy_tcp@dns-test-service.dns-4417 wheezy_udp@dns-test-service.dns-4417.svc wheezy_tcp@dns-test-service.dns-4417.svc wheezy_udp@_http._tcp.dns-test-service.dns-4417.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4417.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4417 jessie_tcp@dns-test-service.dns-4417 jessie_udp@dns-test-service.dns-4417.svc jessie_tcp@dns-test-service.dns-4417.svc jessie_udp@_http._tcp.dns-test-service.dns-4417.svc jessie_tcp@_http._tcp.dns-test-service.dns-4417.svc] Jul 15 00:16:46.877: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:46.880: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods 
dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:46.884: INFO: Unable to read wheezy_udp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:46.887: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:46.891: INFO: Unable to read wheezy_udp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:46.893: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:46.896: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:46.899: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:46.917: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:46.919: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the 
requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:46.922: INFO: Unable to read jessie_udp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:46.924: INFO: Unable to read jessie_tcp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:46.926: INFO: Unable to read jessie_udp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:46.929: INFO: Unable to read jessie_tcp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:46.931: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:46.933: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:46.949: INFO: Lookups using dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4417 wheezy_tcp@dns-test-service.dns-4417 wheezy_udp@dns-test-service.dns-4417.svc wheezy_tcp@dns-test-service.dns-4417.svc wheezy_udp@_http._tcp.dns-test-service.dns-4417.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-4417.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4417 jessie_tcp@dns-test-service.dns-4417 jessie_udp@dns-test-service.dns-4417.svc jessie_tcp@dns-test-service.dns-4417.svc jessie_udp@_http._tcp.dns-test-service.dns-4417.svc jessie_tcp@_http._tcp.dns-test-service.dns-4417.svc] Jul 15 00:16:51.877: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:51.881: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:51.885: INFO: Unable to read wheezy_udp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:51.888: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:51.891: INFO: Unable to read wheezy_udp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:51.894: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:51.897: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4417.svc from pod 
dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:51.900: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:51.922: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:51.925: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:51.928: INFO: Unable to read jessie_udp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:51.931: INFO: Unable to read jessie_tcp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:51.935: INFO: Unable to read jessie_udp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:51.938: INFO: Unable to read jessie_tcp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:51.942: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:51.945: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:51.963: INFO: Lookups using dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4417 wheezy_tcp@dns-test-service.dns-4417 wheezy_udp@dns-test-service.dns-4417.svc wheezy_tcp@dns-test-service.dns-4417.svc wheezy_udp@_http._tcp.dns-test-service.dns-4417.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4417.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4417 jessie_tcp@dns-test-service.dns-4417 jessie_udp@dns-test-service.dns-4417.svc jessie_tcp@dns-test-service.dns-4417.svc jessie_udp@_http._tcp.dns-test-service.dns-4417.svc jessie_tcp@_http._tcp.dns-test-service.dns-4417.svc] Jul 15 00:16:56.878: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:56.882: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:56.886: INFO: Unable to read wheezy_udp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:56.889: INFO: Unable to 
read wheezy_tcp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:56.891: INFO: Unable to read wheezy_udp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:56.894: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:56.896: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:56.899: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:56.917: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:56.920: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:56.923: INFO: Unable to read jessie_udp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 
00:16:56.926: INFO: Unable to read jessie_tcp@dns-test-service.dns-4417 from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:56.929: INFO: Unable to read jessie_udp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:56.932: INFO: Unable to read jessie_tcp@dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:56.935: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:56.939: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4417.svc from pod dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02: the server could not find the requested resource (get pods dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02) Jul 15 00:16:56.958: INFO: Lookups using dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4417 wheezy_tcp@dns-test-service.dns-4417 wheezy_udp@dns-test-service.dns-4417.svc wheezy_tcp@dns-test-service.dns-4417.svc wheezy_udp@_http._tcp.dns-test-service.dns-4417.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4417.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4417 jessie_tcp@dns-test-service.dns-4417 jessie_udp@dns-test-service.dns-4417.svc jessie_tcp@dns-test-service.dns-4417.svc jessie_udp@_http._tcp.dns-test-service.dns-4417.svc 
jessie_tcp@_http._tcp.dns-test-service.dns-4417.svc] Jul 15 00:17:01.969: INFO: DNS probes using dns-4417/dns-test-cbe5087e-ffea-4123-9d79-f9a88c5e8c02 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:17:02.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4417" for this suite. • [SLOW TEST:40.636 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":294,"completed":164,"skipped":2551,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:17:02.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
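For context on the DNS test that just passed: the repeated "Unable to read ..." lines are the framework failing to fetch per-lookup marker files from the probe pod before the probes had converged, and the generated query script (shown in full earlier in this section) is a loop that runs dig for each name/record type and writes an OK marker when a lookup returns an answer. A minimal sketch of that pattern, with illustrative names (the real script is generated per test namespace, e.g. dns-4417 in this run):

```shell
#!/bin/sh
# Sketch of the e2e DNS probe loop (names illustrative, not the exact generated
# script). A non-empty `dig` answer writes an OK marker file, which the test
# framework later reads back through the pod to decide the lookup succeeded.
while true; do
  check="$(dig +notcp +noall +answer +search dns-test-service.dns-4417.svc A)" &&
    test -n "$check" &&
    echo OK > /results/udp@dns-test-service.dns-4417.svc
  check="$(dig +tcp +noall +answer +search dns-test-service.dns-4417.svc A)" &&
    test -n "$check" &&
    echo OK > /results/tcp@dns-test-service.dns-4417.svc
  sleep 1
done
```

Once cluster DNS answers and the marker files exist, the framework stops logging failures, which is why the run above ends with "DNS probes ... succeeded" despite the earlier retries.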
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1328 STEP: creating the pod Jul 15 00:17:02.982: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6865' Jul 15 00:17:03.310: INFO: stderr: "" Jul 15 00:17:03.310: INFO: stdout: "pod/pause created\n" Jul 15 00:17:03.310: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jul 15 00:17:03.310: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6865" to be "running and ready" Jul 15 00:17:03.333: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 22.623806ms Jul 15 00:17:05.337: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026647031s Jul 15 00:17:07.341: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030764833s Jul 15 00:17:09.391: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.080913022s Jul 15 00:17:09.391: INFO: Pod "pause" satisfied condition "running and ready" Jul 15 00:17:09.391: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Jul 15 00:17:09.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6865' Jul 15 00:17:09.529: INFO: stderr: "" Jul 15 00:17:09.529: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jul 15 00:17:09.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6865' Jul 15 00:17:09.656: INFO: stderr: "" Jul 15 00:17:09.656: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s testing-label-value\n" STEP: removing the label testing-label of a pod Jul 15 00:17:09.656: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6865' Jul 15 00:17:09.797: INFO: stderr: "" Jul 15 00:17:09.797: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jul 15 00:17:09.797: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6865' Jul 15 00:17:09.924: INFO: stderr: "" Jul 15 00:17:09.924: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1335 STEP: using delete to clean up resources Jul 15 00:17:09.924: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config delete 
--grace-period=0 --force -f - --namespace=kubectl-6865' Jul 15 00:17:10.058: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 15 00:17:10.058: INFO: stdout: "pod \"pause\" force deleted\n" Jul 15 00:17:10.058: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6865' Jul 15 00:17:10.468: INFO: stderr: "No resources found in kubectl-6865 namespace.\n" Jul 15 00:17:10.468: INFO: stdout: "" Jul 15 00:17:10.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6865 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 15 00:17:10.558: INFO: stderr: "" Jul 15 00:17:10.559: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:17:10.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6865" for this suite. 
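The Kubectl label test above reduces to the following command cycle (a sketch with the --server/--kubeconfig/--namespace flags trimmed; pod name "pause" as in the log):

```shell
# Add a label, verify it appears in the -L column, remove it with the
# trailing-dash syntax, then verify the column is empty again.
kubectl label pod pause testing-label=testing-label-value
kubectl get pod pause -L testing-label   # TESTING-LABEL column shows testing-label-value
kubectl label pod pause testing-label-   # "key-" (trailing dash) removes the label
kubectl get pod pause -L testing-label   # TESTING-LABEL column is now empty
```

The trailing-dash form is standard kubectl semantics for label removal, and `-L <key>` adds the label's value as an extra output column, which is exactly what the stdout captures above show.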
• [SLOW TEST:7.708 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1325
should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":294,"completed":165,"skipped":2551,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:17:10.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Jul 15 00:17:15.496: INFO: Successfully updated pod "labelsupdated76b81dd-c315-4ce8-8a1e-012fb63a35a5"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:17:17.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-459" for this suite.
• [SLOW TEST:6.957 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":294,"completed":166,"skipped":2561,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:17:17.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-858
[It] should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating statefulset ss in namespace statefulset-858
Jul 15 00:17:17.628: INFO: Found 0 stateful pods, waiting for 1
Jul 15 00:17:27.633: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Jul 15 00:17:27.661: INFO: Deleting all statefulset in ns statefulset-858
Jul 15 00:17:27.674: INFO: Scaling statefulset ss to 0
Jul 15 00:17:47.729: INFO: Waiting for statefulset status.replicas updated to 0
Jul 15 00:17:47.756: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:17:47.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-858" for this suite.
• [SLOW TEST:30.241 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":294,"completed":167,"skipped":2620,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:17:47.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 15 00:17:47.828: INFO: Waiting up to 5m0s for pod "pod-285b7a5d-bff1-4226-bfdb-6b52f73632a0" in namespace "emptydir-4033" to be "Succeeded or Failed"
Jul 15 00:17:47.839: INFO: Pod "pod-285b7a5d-bff1-4226-bfdb-6b52f73632a0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.882611ms
Jul 15 00:17:49.844: INFO: Pod "pod-285b7a5d-bff1-4226-bfdb-6b52f73632a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014969567s
Jul 15 00:17:51.848: INFO: Pod "pod-285b7a5d-bff1-4226-bfdb-6b52f73632a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019183955s
STEP: Saw pod success
Jul 15 00:17:51.848: INFO: Pod "pod-285b7a5d-bff1-4226-bfdb-6b52f73632a0" satisfied condition "Succeeded or Failed"
Jul 15 00:17:51.851: INFO: Trying to get logs from node latest-worker pod pod-285b7a5d-bff1-4226-bfdb-6b52f73632a0 container test-container:
STEP: delete the pod
Jul 15 00:17:51.897: INFO: Waiting for pod pod-285b7a5d-bff1-4226-bfdb-6b52f73632a0 to disappear
Jul 15 00:17:51.905: INFO: Pod pod-285b7a5d-bff1-4226-bfdb-6b52f73632a0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:17:51.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4033" for this suite.
•
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":168,"skipped":2636,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:17:51.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-8477
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 15 00:17:52.006: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jul 15 00:17:52.098: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 15 00:17:54.146: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 15 00:17:56.102: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 15 00:17:58.102: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 15 00:18:00.102: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 15 00:18:02.103: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 15 00:18:04.102: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 15 00:18:06.103: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 15 00:18:08.104: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 15 00:18:10.102: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 15 00:18:12.102: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 15 00:18:14.102: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jul 15 00:18:14.107: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jul 15 00:18:18.134: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.171:8080/dial?request=hostname&protocol=udp&host=10.244.2.51&port=8081&tries=1'] Namespace:pod-network-test-8477 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 15 00:18:18.134: INFO: >>> kubeConfig: /root/.kube/config
I0715 00:18:18.171191 7 log.go:181] (0xc000f4e420) (0xc0025970e0) Create stream
I0715 00:18:18.171222 7 log.go:181] (0xc000f4e420) (0xc0025970e0) Stream added, broadcasting: 1
I0715 00:18:18.173443 7 log.go:181] (0xc000f4e420) Reply frame received for 1
I0715 00:18:18.173484 7 log.go:181] (0xc000f4e420) (0xc001f20000) Create stream
I0715 00:18:18.173499 7 log.go:181] (0xc000f4e420) (0xc001f20000) Stream added, broadcasting: 3
I0715 00:18:18.174621 7 log.go:181] (0xc000f4e420) Reply frame received for 3
I0715 00:18:18.174674 7 log.go:181] (0xc000f4e420) (0xc001a01680) Create stream
I0715 00:18:18.174691 7 log.go:181] (0xc000f4e420) (0xc001a01680) Stream added, broadcasting: 5
I0715 00:18:18.175666 7 log.go:181] (0xc000f4e420) Reply frame received for 5
I0715 00:18:18.235199 7 log.go:181] (0xc000f4e420) Data frame received for 3
I0715 00:18:18.235228 7 log.go:181] (0xc001f20000) (3) Data frame handling
I0715 00:18:18.235247 7 log.go:181] (0xc001f20000) (3) Data frame sent
I0715 00:18:18.235766 7 log.go:181] (0xc000f4e420) Data frame received for 5
I0715 00:18:18.235799 7 log.go:181] (0xc001a01680) (5) Data frame handling
I0715 00:18:18.236075 7 log.go:181] (0xc000f4e420) Data frame received for 3
I0715 00:18:18.236097 7 log.go:181] (0xc001f20000) (3) Data frame handling
I0715 00:18:18.237855 7 log.go:181] (0xc000f4e420) Data frame received for 1
I0715 00:18:18.237878 7 log.go:181] (0xc0025970e0) (1) Data frame handling
I0715 00:18:18.237889 7 log.go:181] (0xc0025970e0) (1) Data frame sent
I0715 00:18:18.237900 7 log.go:181] (0xc000f4e420) (0xc0025970e0) Stream removed, broadcasting: 1
I0715 00:18:18.237987 7 log.go:181] (0xc000f4e420) (0xc0025970e0) Stream removed, broadcasting: 1
I0715 00:18:18.237998 7 log.go:181] (0xc000f4e420) (0xc001f20000) Stream removed, broadcasting: 3
I0715 00:18:18.238040 7 log.go:181] (0xc000f4e420) Go away received
I0715 00:18:18.238080 7 log.go:181] (0xc000f4e420) (0xc001a01680) Stream removed, broadcasting: 5
Jul 15 00:18:18.238: INFO: Waiting for responses: map[]
Jul 15 00:18:18.241: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.171:8080/dial?request=hostname&protocol=udp&host=10.244.1.170&port=8081&tries=1'] Namespace:pod-network-test-8477 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 15 00:18:18.241: INFO: >>> kubeConfig: /root/.kube/config
I0715 00:18:18.273095 7 log.go:181] (0xc00206c370) (0xc001f206e0) Create stream
I0715 00:18:18.273120 7 log.go:181] (0xc00206c370) (0xc001f206e0) Stream added, broadcasting: 1
I0715 00:18:18.275534 7 log.go:181] (0xc00206c370) Reply frame received for 1
I0715 00:18:18.275558 7 log.go:181] (0xc00206c370) (0xc00320ce60) Create stream
I0715 00:18:18.275568 7 log.go:181] (0xc00206c370) (0xc00320ce60) Stream added, broadcasting: 3
I0715 00:18:18.276857 7 log.go:181] (0xc00206c370) Reply frame received for 3
I0715 00:18:18.276937 7 log.go:181] (0xc00206c370) (0xc002597180) Create stream
I0715 00:18:18.276960 7 log.go:181] (0xc00206c370) (0xc002597180) Stream added, broadcasting: 5
I0715 00:18:18.277996 7 log.go:181] (0xc00206c370) Reply frame received for 5
I0715 00:18:18.337796 7 log.go:181] (0xc00206c370) Data frame received for 3
I0715 00:18:18.337844 7 log.go:181] (0xc00320ce60) (3) Data frame handling
I0715 00:18:18.337889 7 log.go:181] (0xc00320ce60) (3) Data frame sent
I0715 00:18:18.338236 7 log.go:181] (0xc00206c370) Data frame received for 5
I0715 00:18:18.338279 7 log.go:181] (0xc002597180) (5) Data frame handling
I0715 00:18:18.338328 7 log.go:181] (0xc00206c370) Data frame received for 3
I0715 00:18:18.338351 7 log.go:181] (0xc00320ce60) (3) Data frame handling
I0715 00:18:18.340055 7 log.go:181] (0xc00206c370) Data frame received for 1
I0715 00:18:18.340074 7 log.go:181] (0xc001f206e0) (1) Data frame handling
I0715 00:18:18.340097 7 log.go:181] (0xc001f206e0) (1) Data frame sent
I0715 00:18:18.340126 7 log.go:181] (0xc00206c370) (0xc001f206e0) Stream removed, broadcasting: 1
I0715 00:18:18.340235 7 log.go:181] (0xc00206c370) Go away received
I0715 00:18:18.340263 7 log.go:181] (0xc00206c370) (0xc001f206e0) Stream removed, broadcasting: 1
I0715 00:18:18.340311 7 log.go:181] (0xc00206c370) (0xc00320ce60) Stream removed, broadcasting: 3
I0715 00:18:18.340340 7 log.go:181] (0xc00206c370) (0xc002597180) Stream removed, broadcasting: 5
Jul 15 00:18:18.340: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:18:18.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8477" for this suite.
• [SLOW TEST:26.436 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":294,"completed":169,"skipped":2642,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
S
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:18:18.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Jul 15 00:18:18.397: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:18:24.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2379" for this suite.
• [SLOW TEST:6.482 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":294,"completed":170,"skipped":2643,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:18:24.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod busybox-990548d0-3f9b-4bde-85df-2db05cf46582 in namespace container-probe-8613
Jul 15 00:18:29.793: INFO: Started pod busybox-990548d0-3f9b-4bde-85df-2db05cf46582 in namespace container-probe-8613
STEP: checking the pod's current state and verifying that restartCount is present
Jul 15 00:18:29.796: INFO: Initial restart count of pod busybox-990548d0-3f9b-4bde-85df-2db05cf46582 is 0
Jul 15 00:19:25.920: INFO: Restart count of pod container-probe-8613/busybox-990548d0-3f9b-4bde-85df-2db05cf46582 is now 1 (56.124288656s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:19:25.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8613" for this suite.
• [SLOW TEST:61.136 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":294,"completed":171,"skipped":2671,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:19:25.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jul 15 00:19:26.052: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c0957e23-1695-4135-a53a-8f28916783f6" in namespace "projected-3917" to be "Succeeded or Failed"
Jul 15 00:19:26.063: INFO: Pod "downwardapi-volume-c0957e23-1695-4135-a53a-8f28916783f6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.705246ms
Jul 15 00:19:28.067: INFO: Pod "downwardapi-volume-c0957e23-1695-4135-a53a-8f28916783f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015331143s
Jul 15 00:19:30.071: INFO: Pod "downwardapi-volume-c0957e23-1695-4135-a53a-8f28916783f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019415027s
STEP: Saw pod success
Jul 15 00:19:30.071: INFO: Pod "downwardapi-volume-c0957e23-1695-4135-a53a-8f28916783f6" satisfied condition "Succeeded or Failed"
Jul 15 00:19:30.074: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c0957e23-1695-4135-a53a-8f28916783f6 container client-container:
STEP: delete the pod
Jul 15 00:19:30.239: INFO: Waiting for pod downwardapi-volume-c0957e23-1695-4135-a53a-8f28916783f6 to disappear
Jul 15 00:19:30.243: INFO: Pod downwardapi-volume-c0957e23-1695-4135-a53a-8f28916783f6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:19:30.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3917" for this suite.
•
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":294,"completed":172,"skipped":2726,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:19:30.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:19:43.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3837" for this suite.
• [SLOW TEST:13.243 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":294,"completed":173,"skipped":2745,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:19:43.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-76f0275e-f585-469b-9e82-14e6d777534f
STEP: Creating a pod to test consume secrets
Jul 15 00:19:43.572: INFO: Waiting up to 5m0s for pod "pod-secrets-889989da-98dc-46f3-8d0a-2a6bf1c5eb09" in namespace "secrets-2613" to be "Succeeded or Failed"
Jul 15 00:19:43.576: INFO: Pod "pod-secrets-889989da-98dc-46f3-8d0a-2a6bf1c5eb09": Phase="Pending", Reason="", readiness=false. Elapsed: 3.530954ms
Jul 15 00:19:45.580: INFO: Pod "pod-secrets-889989da-98dc-46f3-8d0a-2a6bf1c5eb09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007505365s
Jul 15 00:19:47.584: INFO: Pod "pod-secrets-889989da-98dc-46f3-8d0a-2a6bf1c5eb09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011409221s
STEP: Saw pod success
Jul 15 00:19:47.584: INFO: Pod "pod-secrets-889989da-98dc-46f3-8d0a-2a6bf1c5eb09" satisfied condition "Succeeded or Failed"
Jul 15 00:19:47.587: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-889989da-98dc-46f3-8d0a-2a6bf1c5eb09 container secret-volume-test:
STEP: delete the pod
Jul 15 00:19:47.619: INFO: Waiting for pod pod-secrets-889989da-98dc-46f3-8d0a-2a6bf1c5eb09 to disappear
Jul 15 00:19:47.656: INFO: Pod pod-secrets-889989da-98dc-46f3-8d0a-2a6bf1c5eb09 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:19:47.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2613" for this suite.
•
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":174,"skipped":2766,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:19:47.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jul 15 00:19:47.805: INFO: >>> kubeConfig: /root/.kube/config
Jul 15 00:19:50.715: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:20:00.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3065" for this suite.
• [SLOW TEST:12.554 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":294,"completed":175,"skipped":2792,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
SSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:20:00.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod liveness-692b9056-f5ad-4bab-a1d1-bc5a8e145784 in namespace container-probe-3661
Jul 15 00:20:04.408: INFO: Started pod liveness-692b9056-f5ad-4bab-a1d1-bc5a8e145784 in namespace container-probe-3661
STEP: checking the pod's current state and verifying that restartCount is present
Jul 15 00:20:04.411: INFO: Initial restart count of pod liveness-692b9056-f5ad-4bab-a1d1-bc5a8e145784 is 0
Jul 15 00:20:22.451: INFO: Restart count of pod container-probe-3661/liveness-692b9056-f5ad-4bab-a1d1-bc5a8e145784 is now 1 (18.039676747s elapsed)
Jul 15 00:20:42.601: INFO: Restart count of pod container-probe-3661/liveness-692b9056-f5ad-4bab-a1d1-bc5a8e145784 is now 2 (38.189985442s elapsed)
Jul 15 00:21:02.671: INFO: Restart count of pod container-probe-3661/liveness-692b9056-f5ad-4bab-a1d1-bc5a8e145784 is now 3 (58.259693546s elapsed)
Jul 15 00:21:22.886: INFO: Restart count of pod container-probe-3661/liveness-692b9056-f5ad-4bab-a1d1-bc5a8e145784 is now 4 (1m18.475392275s elapsed)
Jul 15 00:22:25.446: INFO: Restart count of pod container-probe-3661/liveness-692b9056-f5ad-4bab-a1d1-bc5a8e145784 is now 5 (2m21.035054485s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:22:25.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3661" for this suite.
• [SLOW TEST:145.279 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":294,"completed":176,"skipped":2795,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:22:25.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name
secret-emptykey-test-6807a7c1-6862-4572-8c54-acc928a18db6 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:22:25.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1583" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":294,"completed":177,"skipped":2818,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:22:25.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1078 Jul 15 00:22:30.045: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-1078 
kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jul 15 00:22:33.232: INFO: stderr: "I0715 00:22:33.153574 1720 log.go:181] (0xc0005aeb00) (0xc000b2f720) Create stream\nI0715 00:22:33.153628 1720 log.go:181] (0xc0005aeb00) (0xc000b2f720) Stream added, broadcasting: 1\nI0715 00:22:33.157473 1720 log.go:181] (0xc0005aeb00) Reply frame received for 1\nI0715 00:22:33.157518 1720 log.go:181] (0xc0005aeb00) (0xc000b2f7c0) Create stream\nI0715 00:22:33.157526 1720 log.go:181] (0xc0005aeb00) (0xc000b2f7c0) Stream added, broadcasting: 3\nI0715 00:22:33.158799 1720 log.go:181] (0xc0005aeb00) Reply frame received for 3\nI0715 00:22:33.158833 1720 log.go:181] (0xc0005aeb00) (0xc000a56640) Create stream\nI0715 00:22:33.158853 1720 log.go:181] (0xc0005aeb00) (0xc000a56640) Stream added, broadcasting: 5\nI0715 00:22:33.159738 1720 log.go:181] (0xc0005aeb00) Reply frame received for 5\nI0715 00:22:33.218594 1720 log.go:181] (0xc0005aeb00) Data frame received for 5\nI0715 00:22:33.218625 1720 log.go:181] (0xc000a56640) (5) Data frame handling\nI0715 00:22:33.218653 1720 log.go:181] (0xc000a56640) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0715 00:22:33.225369 1720 log.go:181] (0xc0005aeb00) Data frame received for 3\nI0715 00:22:33.225398 1720 log.go:181] (0xc000b2f7c0) (3) Data frame handling\nI0715 00:22:33.225420 1720 log.go:181] (0xc000b2f7c0) (3) Data frame sent\nI0715 00:22:33.225601 1720 log.go:181] (0xc0005aeb00) Data frame received for 5\nI0715 00:22:33.225620 1720 log.go:181] (0xc000a56640) (5) Data frame handling\nI0715 00:22:33.226212 1720 log.go:181] (0xc0005aeb00) Data frame received for 3\nI0715 00:22:33.226232 1720 log.go:181] (0xc000b2f7c0) (3) Data frame handling\nI0715 00:22:33.227574 1720 log.go:181] (0xc0005aeb00) Data frame received for 1\nI0715 00:22:33.227620 1720 log.go:181] (0xc000b2f720) (1) Data frame handling\nI0715 00:22:33.227643 1720 log.go:181] 
(0xc000b2f720) (1) Data frame sent\nI0715 00:22:33.227662 1720 log.go:181] (0xc0005aeb00) (0xc000b2f720) Stream removed, broadcasting: 1\nI0715 00:22:33.227682 1720 log.go:181] (0xc0005aeb00) Go away received\nI0715 00:22:33.228128 1720 log.go:181] (0xc0005aeb00) (0xc000b2f720) Stream removed, broadcasting: 1\nI0715 00:22:33.228149 1720 log.go:181] (0xc0005aeb00) (0xc000b2f7c0) Stream removed, broadcasting: 3\nI0715 00:22:33.228160 1720 log.go:181] (0xc0005aeb00) (0xc000a56640) Stream removed, broadcasting: 5\n" Jul 15 00:22:33.233: INFO: stdout: "iptables" Jul 15 00:22:33.233: INFO: proxyMode: iptables Jul 15 00:22:33.238: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 15 00:22:33.262: INFO: Pod kube-proxy-mode-detector still exists Jul 15 00:22:35.262: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 15 00:22:35.267: INFO: Pod kube-proxy-mode-detector still exists Jul 15 00:22:37.262: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 15 00:22:37.266: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-1078 STEP: creating replication controller affinity-clusterip-timeout in namespace services-1078 I0715 00:22:37.396613 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-1078, replica count: 3 I0715 00:22:40.447126 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0715 00:22:43.447390 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 15 00:22:43.453: INFO: Creating new exec pod Jul 15 00:22:48.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-1078 execpod-affinitywf2rc -- /bin/sh -x 
-c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jul 15 00:22:48.703: INFO: stderr: "I0715 00:22:48.629740 1739 log.go:181] (0xc000ab3550) (0xc000c73cc0) Create stream\nI0715 00:22:48.629783 1739 log.go:181] (0xc000ab3550) (0xc000c73cc0) Stream added, broadcasting: 1\nI0715 00:22:48.632465 1739 log.go:181] (0xc000ab3550) Reply frame received for 1\nI0715 00:22:48.632556 1739 log.go:181] (0xc000ab3550) (0xc000c53040) Create stream\nI0715 00:22:48.632587 1739 log.go:181] (0xc000ab3550) (0xc000c53040) Stream added, broadcasting: 3\nI0715 00:22:48.633964 1739 log.go:181] (0xc000ab3550) Reply frame received for 3\nI0715 00:22:48.634326 1739 log.go:181] (0xc000ab3550) (0xc000c4cdc0) Create stream\nI0715 00:22:48.634340 1739 log.go:181] (0xc000ab3550) (0xc000c4cdc0) Stream added, broadcasting: 5\nI0715 00:22:48.635272 1739 log.go:181] (0xc000ab3550) Reply frame received for 5\nI0715 00:22:48.696376 1739 log.go:181] (0xc000ab3550) Data frame received for 5\nI0715 00:22:48.696408 1739 log.go:181] (0xc000c4cdc0) (5) Data frame handling\nI0715 00:22:48.696426 1739 log.go:181] (0xc000c4cdc0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0715 00:22:48.697507 1739 log.go:181] (0xc000ab3550) Data frame received for 5\nI0715 00:22:48.697544 1739 log.go:181] (0xc000c4cdc0) (5) Data frame handling\nI0715 00:22:48.697557 1739 log.go:181] (0xc000c4cdc0) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0715 00:22:48.697760 1739 log.go:181] (0xc000ab3550) Data frame received for 5\nI0715 00:22:48.697779 1739 log.go:181] (0xc000c4cdc0) (5) Data frame handling\nI0715 00:22:48.697941 1739 log.go:181] (0xc000ab3550) Data frame received for 3\nI0715 00:22:48.697959 1739 log.go:181] (0xc000c53040) (3) Data frame handling\nI0715 00:22:48.699534 1739 log.go:181] (0xc000ab3550) Data frame received for 1\nI0715 00:22:48.699548 1739 log.go:181] (0xc000c73cc0) (1) Data frame handling\nI0715 00:22:48.699559 1739 log.go:181] 
(0xc000c73cc0) (1) Data frame sent\nI0715 00:22:48.699655 1739 log.go:181] (0xc000ab3550) (0xc000c73cc0) Stream removed, broadcasting: 1\nI0715 00:22:48.699858 1739 log.go:181] (0xc000ab3550) Go away received\nI0715 00:22:48.700023 1739 log.go:181] (0xc000ab3550) (0xc000c73cc0) Stream removed, broadcasting: 1\nI0715 00:22:48.700039 1739 log.go:181] (0xc000ab3550) (0xc000c53040) Stream removed, broadcasting: 3\nI0715 00:22:48.700047 1739 log.go:181] (0xc000ab3550) (0xc000c4cdc0) Stream removed, broadcasting: 5\n" Jul 15 00:22:48.703: INFO: stdout: "" Jul 15 00:22:48.704: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-1078 execpod-affinitywf2rc -- /bin/sh -x -c nc -zv -t -w 2 10.99.231.155 80' Jul 15 00:22:48.913: INFO: stderr: "I0715 00:22:48.848364 1758 log.go:181] (0xc0007bf550) (0xc0007b6aa0) Create stream\nI0715 00:22:48.848417 1758 log.go:181] (0xc0007bf550) (0xc0007b6aa0) Stream added, broadcasting: 1\nI0715 00:22:48.853293 1758 log.go:181] (0xc0007bf550) Reply frame received for 1\nI0715 00:22:48.853327 1758 log.go:181] (0xc0007bf550) (0xc000abd220) Create stream\nI0715 00:22:48.853337 1758 log.go:181] (0xc0007bf550) (0xc000abd220) Stream added, broadcasting: 3\nI0715 00:22:48.854352 1758 log.go:181] (0xc0007bf550) Reply frame received for 3\nI0715 00:22:48.854383 1758 log.go:181] (0xc0007bf550) (0xc000852a00) Create stream\nI0715 00:22:48.854394 1758 log.go:181] (0xc0007bf550) (0xc000852a00) Stream added, broadcasting: 5\nI0715 00:22:48.855430 1758 log.go:181] (0xc0007bf550) Reply frame received for 5\nI0715 00:22:48.906767 1758 log.go:181] (0xc0007bf550) Data frame received for 3\nI0715 00:22:48.906824 1758 log.go:181] (0xc000abd220) (3) Data frame handling\nI0715 00:22:48.906866 1758 log.go:181] (0xc0007bf550) Data frame received for 5\nI0715 00:22:48.906896 1758 log.go:181] (0xc000852a00) (5) Data frame handling\nI0715 00:22:48.906935 1758 log.go:181] (0xc000852a00) (5) 
Data frame sent\n+ nc -zv -t -w 2 10.99.231.155 80\nConnection to 10.99.231.155 80 port [tcp/http] succeeded!\nI0715 00:22:48.906957 1758 log.go:181] (0xc0007bf550) Data frame received for 5\nI0715 00:22:48.906978 1758 log.go:181] (0xc000852a00) (5) Data frame handling\nI0715 00:22:48.908288 1758 log.go:181] (0xc0007bf550) Data frame received for 1\nI0715 00:22:48.908314 1758 log.go:181] (0xc0007b6aa0) (1) Data frame handling\nI0715 00:22:48.908327 1758 log.go:181] (0xc0007b6aa0) (1) Data frame sent\nI0715 00:22:48.908344 1758 log.go:181] (0xc0007bf550) (0xc0007b6aa0) Stream removed, broadcasting: 1\nI0715 00:22:48.908383 1758 log.go:181] (0xc0007bf550) Go away received\nI0715 00:22:48.908964 1758 log.go:181] (0xc0007bf550) (0xc0007b6aa0) Stream removed, broadcasting: 1\nI0715 00:22:48.908987 1758 log.go:181] (0xc0007bf550) (0xc000abd220) Stream removed, broadcasting: 3\nI0715 00:22:48.908999 1758 log.go:181] (0xc0007bf550) (0xc000852a00) Stream removed, broadcasting: 5\n" Jul 15 00:22:48.913: INFO: stdout: "" Jul 15 00:22:48.914: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-1078 execpod-affinitywf2rc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.99.231.155:80/ ; done' Jul 15 00:22:49.220: INFO: stderr: "I0715 00:22:49.051494 1776 log.go:181] (0xc0005cc160) (0xc000c0f900) Create stream\nI0715 00:22:49.051583 1776 log.go:181] (0xc0005cc160) (0xc000c0f900) Stream added, broadcasting: 1\nI0715 00:22:49.053630 1776 log.go:181] (0xc0005cc160) Reply frame received for 1\nI0715 00:22:49.053681 1776 log.go:181] (0xc0005cc160) (0xc000c18b40) Create stream\nI0715 00:22:49.053695 1776 log.go:181] (0xc0005cc160) (0xc000c18b40) Stream added, broadcasting: 3\nI0715 00:22:49.054693 1776 log.go:181] (0xc0005cc160) Reply frame received for 3\nI0715 00:22:49.054728 1776 log.go:181] (0xc0005cc160) (0xc000a15040) Create stream\nI0715 00:22:49.054740 1776 
log.go:181] (0xc0005cc160) (0xc000a15040) Stream added, broadcasting: 5\nI0715 00:22:49.055852 1776 log.go:181] (0xc0005cc160) Reply frame received for 5\nI0715 00:22:49.125478 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.125519 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.125532 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.125557 1776 log.go:181] (0xc0005cc160) Data frame received for 5\nI0715 00:22:49.125566 1776 log.go:181] (0xc000a15040) (5) Data frame handling\nI0715 00:22:49.125575 1776 log.go:181] (0xc000a15040) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.231.155:80/\nI0715 00:22:49.128613 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.128631 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.128647 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.129111 1776 log.go:181] (0xc0005cc160) Data frame received for 5\nI0715 00:22:49.129137 1776 log.go:181] (0xc000a15040) (5) Data frame handling\nI0715 00:22:49.129144 1776 log.go:181] (0xc000a15040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.231.155:80/\nI0715 00:22:49.129155 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.129160 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.129169 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.133436 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.133448 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.133458 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.133914 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.133934 1776 log.go:181] (0xc0005cc160) Data frame received for 5\nI0715 00:22:49.133956 1776 log.go:181] (0xc000a15040) (5) Data frame handling\nI0715 00:22:49.133968 1776 log.go:181] 
(0xc000a15040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.231.155:80/\nI0715 00:22:49.133983 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.133996 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.137665 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.137682 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.137694 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.138150 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.138166 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.138173 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.138183 1776 log.go:181] (0xc0005cc160) Data frame received for 5\nI0715 00:22:49.138188 1776 log.go:181] (0xc000a15040) (5) Data frame handling\nI0715 00:22:49.138195 1776 log.go:181] (0xc000a15040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0715 00:22:49.138202 1776 log.go:181] (0xc0005cc160) Data frame received for 5\nI0715 00:22:49.138220 1776 log.go:181] (0xc000a15040) (5) Data frame handling\nI0715 00:22:49.138236 1776 log.go:181] (0xc000a15040) (5) Data frame sent\n http://10.99.231.155:80/\nI0715 00:22:49.142435 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.142454 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.142482 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.142795 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.142807 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.142813 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.142827 1776 log.go:181] (0xc0005cc160) Data frame received for 5\nI0715 00:22:49.142838 1776 log.go:181] (0xc000a15040) (5) Data frame handling\nI0715 00:22:49.142848 1776 log.go:181] (0xc000a15040) (5) Data frame sent\nI0715 00:22:49.142856 1776 
log.go:181] (0xc0005cc160) Data frame received for 5\nI0715 00:22:49.142862 1776 log.go:181] (0xc000a15040) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.231.155:80/\nI0715 00:22:49.142911 1776 log.go:181] (0xc000a15040) (5) Data frame sent\nI0715 00:22:49.146921 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.146933 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.146941 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.147451 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.147472 1776 log.go:181] (0xc0005cc160) Data frame received for 5\nI0715 00:22:49.147491 1776 log.go:181] (0xc000a15040) (5) Data frame handling\nI0715 00:22:49.147498 1776 log.go:181] (0xc000a15040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.231.155:80/\nI0715 00:22:49.147511 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.147528 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.151027 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.151040 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.151049 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.151853 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.151883 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.151898 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.151924 1776 log.go:181] (0xc0005cc160) Data frame received for 5\nI0715 00:22:49.151935 1776 log.go:181] (0xc000a15040) (5) Data frame handling\nI0715 00:22:49.151958 1776 log.go:181] (0xc000a15040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.231.155:80/\nI0715 00:22:49.156415 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.156433 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 
00:22:49.156461 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.157024 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.157039 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.157049 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.157061 1776 log.go:181] (0xc0005cc160) Data frame received for 5\nI0715 00:22:49.157068 1776 log.go:181] (0xc000a15040) (5) Data frame handling\nI0715 00:22:49.157076 1776 log.go:181] (0xc000a15040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.231.155:80/\nI0715 00:22:49.161410 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.161433 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.161450 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.162159 1776 log.go:181] (0xc0005cc160) Data frame received for 5\nI0715 00:22:49.162172 1776 log.go:181] (0xc000a15040) (5) Data frame handling\nI0715 00:22:49.162183 1776 log.go:181] (0xc000a15040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.231.155:80/\nI0715 00:22:49.162219 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.162233 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.162241 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.166430 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.166454 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.166476 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.166880 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.166902 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.166914 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.166932 1776 log.go:181] (0xc0005cc160) Data frame received for 5\nI0715 00:22:49.166942 1776 log.go:181] (0xc000a15040) (5) Data frame 
handling\nI0715 00:22:49.166951 1776 log.go:181] (0xc000a15040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.231.155:80/\nI0715 00:22:49.172327 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.172349 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.172366 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.173136 1776 log.go:181] (0xc0005cc160) Data frame received for 5\nI0715 00:22:49.173234 1776 log.go:181] (0xc000a15040) (5) Data frame handling\nI0715 00:22:49.173280 1776 log.go:181] (0xc000a15040) (5) Data frame sent\nI0715 00:22:49.173293 1776 log.go:181] (0xc0005cc160) Data frame received for 5\nI0715 00:22:49.173302 1776 log.go:181] (0xc000a15040) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.231.155:80/\nI0715 00:22:49.173323 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.173371 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.173389 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.173411 1776 log.go:181] (0xc000a15040) (5) Data frame sent\nI0715 00:22:49.178182 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.178201 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.178216 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.178794 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.178831 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.178853 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.178889 1776 log.go:181] (0xc0005cc160) Data frame received for 5\nI0715 00:22:49.178908 1776 log.go:181] (0xc000a15040) (5) Data frame handling\nI0715 00:22:49.178932 1776 log.go:181] (0xc000a15040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.231.155:80/\nI0715 00:22:49.183359 1776 log.go:181] (0xc0005cc160) Data frame 
received for 3\nI0715 00:22:49.183373 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.183381 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.184204 1776 log.go:181] (0xc0005cc160) Data frame received for 5\nI0715 00:22:49.184224 1776 log.go:181] (0xc000a15040) (5) Data frame handling\nI0715 00:22:49.184241 1776 log.go:181] (0xc000a15040) (5) Data frame sent\nI0715 00:22:49.184251 1776 log.go:181] (0xc0005cc160) Data frame received for 5\nI0715 00:22:49.184260 1776 log.go:181] (0xc000a15040) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.231.155:80/\nI0715 00:22:49.184276 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.184297 1776 log.go:181] (0xc000a15040) (5) Data frame sent\nI0715 00:22:49.184308 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.184315 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.191120 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.191149 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.191170 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.192051 1776 log.go:181] (0xc0005cc160) Data frame received for 5\nI0715 00:22:49.192076 1776 log.go:181] (0xc000a15040) (5) Data frame handling\nI0715 00:22:49.192105 1776 log.go:181] (0xc000a15040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.231.155:80/\nI0715 00:22:49.192365 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.192397 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.192424 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.199693 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.199733 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.199753 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.200324 1776 log.go:181] 
(0xc0005cc160) Data frame received for 5\nI0715 00:22:49.200356 1776 log.go:181] (0xc000a15040) (5) Data frame handling\nI0715 00:22:49.200377 1776 log.go:181] (0xc000a15040) (5) Data frame sent\nI0715 00:22:49.200398 1776 log.go:181] (0xc0005cc160) Data frame received for 5\nI0715 00:22:49.200416 1776 log.go:181] (0xc000a15040) (5) Data frame handling\nI0715 00:22:49.200434 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.200450 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.200465 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.231.155:80/\nI0715 00:22:49.200515 1776 log.go:181] (0xc000a15040) (5) Data frame sent\nI0715 00:22:49.206487 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.206511 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.206540 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.207399 1776 log.go:181] (0xc0005cc160) Data frame received for 5\nI0715 00:22:49.207430 1776 log.go:181] (0xc000a15040) (5) Data frame handling\nI0715 00:22:49.207456 1776 log.go:181] (0xc000a15040) (5) Data frame sent\nI0715 00:22:49.207522 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.207540 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.207551 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.231.155:80/\nI0715 00:22:49.211174 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 00:22:49.211199 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.211218 1776 log.go:181] (0xc000c18b40) (3) Data frame sent\nI0715 00:22:49.212178 1776 log.go:181] (0xc0005cc160) Data frame received for 5\nI0715 00:22:49.212200 1776 log.go:181] (0xc000a15040) (5) Data frame handling\nI0715 00:22:49.212548 1776 log.go:181] (0xc0005cc160) Data frame received for 3\nI0715 
00:22:49.212558 1776 log.go:181] (0xc000c18b40) (3) Data frame handling\nI0715 00:22:49.214511 1776 log.go:181] (0xc0005cc160) Data frame received for 1\nI0715 00:22:49.214544 1776 log.go:181] (0xc000c0f900) (1) Data frame handling\nI0715 00:22:49.214566 1776 log.go:181] (0xc000c0f900) (1) Data frame sent\nI0715 00:22:49.214587 1776 log.go:181] (0xc0005cc160) (0xc000c0f900) Stream removed, broadcasting: 1\nI0715 00:22:49.214745 1776 log.go:181] (0xc0005cc160) Go away received\nI0715 00:22:49.215168 1776 log.go:181] (0xc0005cc160) (0xc000c0f900) Stream removed, broadcasting: 1\nI0715 00:22:49.215191 1776 log.go:181] (0xc0005cc160) (0xc000c18b40) Stream removed, broadcasting: 3\nI0715 00:22:49.215202 1776 log.go:181] (0xc0005cc160) (0xc000a15040) Stream removed, broadcasting: 5\n" Jul 15 00:22:49.221: INFO: stdout: "\naffinity-clusterip-timeout-pfsgw\naffinity-clusterip-timeout-pfsgw\naffinity-clusterip-timeout-pfsgw\naffinity-clusterip-timeout-pfsgw\naffinity-clusterip-timeout-pfsgw\naffinity-clusterip-timeout-pfsgw\naffinity-clusterip-timeout-pfsgw\naffinity-clusterip-timeout-pfsgw\naffinity-clusterip-timeout-pfsgw\naffinity-clusterip-timeout-pfsgw\naffinity-clusterip-timeout-pfsgw\naffinity-clusterip-timeout-pfsgw\naffinity-clusterip-timeout-pfsgw\naffinity-clusterip-timeout-pfsgw\naffinity-clusterip-timeout-pfsgw\naffinity-clusterip-timeout-pfsgw" Jul 15 00:22:49.221: INFO: Received response from host: affinity-clusterip-timeout-pfsgw Jul 15 00:22:49.221: INFO: Received response from host: affinity-clusterip-timeout-pfsgw Jul 15 00:22:49.221: INFO: Received response from host: affinity-clusterip-timeout-pfsgw Jul 15 00:22:49.221: INFO: Received response from host: affinity-clusterip-timeout-pfsgw Jul 15 00:22:49.221: INFO: Received response from host: affinity-clusterip-timeout-pfsgw Jul 15 00:22:49.221: INFO: Received response from host: affinity-clusterip-timeout-pfsgw Jul 15 00:22:49.221: INFO: Received response from host: affinity-clusterip-timeout-pfsgw Jul 
15 00:22:49.221: INFO: Received response from host: affinity-clusterip-timeout-pfsgw Jul 15 00:22:49.221: INFO: Received response from host: affinity-clusterip-timeout-pfsgw Jul 15 00:22:49.221: INFO: Received response from host: affinity-clusterip-timeout-pfsgw Jul 15 00:22:49.221: INFO: Received response from host: affinity-clusterip-timeout-pfsgw Jul 15 00:22:49.221: INFO: Received response from host: affinity-clusterip-timeout-pfsgw Jul 15 00:22:49.221: INFO: Received response from host: affinity-clusterip-timeout-pfsgw Jul 15 00:22:49.221: INFO: Received response from host: affinity-clusterip-timeout-pfsgw Jul 15 00:22:49.221: INFO: Received response from host: affinity-clusterip-timeout-pfsgw Jul 15 00:22:49.221: INFO: Received response from host: affinity-clusterip-timeout-pfsgw Jul 15 00:22:49.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-1078 execpod-affinitywf2rc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.99.231.155:80/' Jul 15 00:22:49.468: INFO: stderr: "I0715 00:22:49.378113 1795 log.go:181] (0xc0005f7340) (0xc00032c3c0) Create stream\nI0715 00:22:49.378191 1795 log.go:181] (0xc0005f7340) (0xc00032c3c0) Stream added, broadcasting: 1\nI0715 00:22:49.383167 1795 log.go:181] (0xc0005f7340) Reply frame received for 1\nI0715 00:22:49.383220 1795 log.go:181] (0xc0005f7340) (0xc000d1d180) Create stream\nI0715 00:22:49.383233 1795 log.go:181] (0xc0005f7340) (0xc000d1d180) Stream added, broadcasting: 3\nI0715 00:22:49.384185 1795 log.go:181] (0xc0005f7340) Reply frame received for 3\nI0715 00:22:49.384249 1795 log.go:181] (0xc0005f7340) (0xc0006be500) Create stream\nI0715 00:22:49.384280 1795 log.go:181] (0xc0005f7340) (0xc0006be500) Stream added, broadcasting: 5\nI0715 00:22:49.385370 1795 log.go:181] (0xc0005f7340) Reply frame received for 5\nI0715 00:22:49.455839 1795 log.go:181] (0xc0005f7340) Data frame received for 5\nI0715 00:22:49.455875 1795 
log.go:181] (0xc0006be500) (5) Data frame handling\nI0715 00:22:49.455887 1795 log.go:181] (0xc0006be500) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.99.231.155:80/\nI0715 00:22:49.458169 1795 log.go:181] (0xc0005f7340) Data frame received for 3\nI0715 00:22:49.458186 1795 log.go:181] (0xc000d1d180) (3) Data frame handling\nI0715 00:22:49.458207 1795 log.go:181] (0xc000d1d180) (3) Data frame sent\nI0715 00:22:49.459260 1795 log.go:181] (0xc0005f7340) Data frame received for 3\nI0715 00:22:49.459276 1795 log.go:181] (0xc000d1d180) (3) Data frame handling\nI0715 00:22:49.459398 1795 log.go:181] (0xc0005f7340) Data frame received for 5\nI0715 00:22:49.459431 1795 log.go:181] (0xc0006be500) (5) Data frame handling\nI0715 00:22:49.460920 1795 log.go:181] (0xc0005f7340) Data frame received for 1\nI0715 00:22:49.460939 1795 log.go:181] (0xc00032c3c0) (1) Data frame handling\nI0715 00:22:49.460964 1795 log.go:181] (0xc00032c3c0) (1) Data frame sent\nI0715 00:22:49.461177 1795 log.go:181] (0xc0005f7340) (0xc00032c3c0) Stream removed, broadcasting: 1\nI0715 00:22:49.461384 1795 log.go:181] (0xc0005f7340) Go away received\nI0715 00:22:49.461446 1795 log.go:181] (0xc0005f7340) (0xc00032c3c0) Stream removed, broadcasting: 1\nI0715 00:22:49.461462 1795 log.go:181] (0xc0005f7340) (0xc000d1d180) Stream removed, broadcasting: 3\nI0715 00:22:49.461472 1795 log.go:181] (0xc0005f7340) (0xc0006be500) Stream removed, broadcasting: 5\n" Jul 15 00:22:49.468: INFO: stdout: "affinity-clusterip-timeout-pfsgw" Jul 15 00:23:04.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-1078 execpod-affinitywf2rc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.99.231.155:80/' Jul 15 00:23:04.705: INFO: stderr: "I0715 00:23:04.605949 1813 log.go:181] (0xc0006c33f0) (0xc000ea2320) Create stream\nI0715 00:23:04.606014 1813 log.go:181] (0xc0006c33f0) (0xc000ea2320) Stream added, broadcasting: 
1\nI0715 00:23:04.613646 1813 log.go:181] (0xc0006c33f0) Reply frame received for 1\nI0715 00:23:04.613700 1813 log.go:181] (0xc0006c33f0) (0xc000ac12c0) Create stream\nI0715 00:23:04.613722 1813 log.go:181] (0xc0006c33f0) (0xc000ac12c0) Stream added, broadcasting: 3\nI0715 00:23:04.614783 1813 log.go:181] (0xc0006c33f0) Reply frame received for 3\nI0715 00:23:04.614829 1813 log.go:181] (0xc0006c33f0) (0xc0007123c0) Create stream\nI0715 00:23:04.614841 1813 log.go:181] (0xc0006c33f0) (0xc0007123c0) Stream added, broadcasting: 5\nI0715 00:23:04.615905 1813 log.go:181] (0xc0006c33f0) Reply frame received for 5\nI0715 00:23:04.693768 1813 log.go:181] (0xc0006c33f0) Data frame received for 5\nI0715 00:23:04.693800 1813 log.go:181] (0xc0007123c0) (5) Data frame handling\nI0715 00:23:04.693820 1813 log.go:181] (0xc0007123c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.99.231.155:80/\nI0715 00:23:04.698691 1813 log.go:181] (0xc0006c33f0) Data frame received for 3\nI0715 00:23:04.698711 1813 log.go:181] (0xc000ac12c0) (3) Data frame handling\nI0715 00:23:04.698718 1813 log.go:181] (0xc000ac12c0) (3) Data frame sent\nI0715 00:23:04.699229 1813 log.go:181] (0xc0006c33f0) Data frame received for 3\nI0715 00:23:04.699250 1813 log.go:181] (0xc000ac12c0) (3) Data frame handling\nI0715 00:23:04.699331 1813 log.go:181] (0xc0006c33f0) Data frame received for 5\nI0715 00:23:04.699342 1813 log.go:181] (0xc0007123c0) (5) Data frame handling\nI0715 00:23:04.701056 1813 log.go:181] (0xc0006c33f0) Data frame received for 1\nI0715 00:23:04.701080 1813 log.go:181] (0xc000ea2320) (1) Data frame handling\nI0715 00:23:04.701092 1813 log.go:181] (0xc000ea2320) (1) Data frame sent\nI0715 00:23:04.701109 1813 log.go:181] (0xc0006c33f0) (0xc000ea2320) Stream removed, broadcasting: 1\nI0715 00:23:04.701208 1813 log.go:181] (0xc0006c33f0) Go away received\nI0715 00:23:04.701513 1813 log.go:181] (0xc0006c33f0) (0xc000ea2320) Stream removed, broadcasting: 1\nI0715 00:23:04.701527 
1813 log.go:181] (0xc0006c33f0) (0xc000ac12c0) Stream removed, broadcasting: 3\nI0715 00:23:04.701534 1813 log.go:181] (0xc0006c33f0) (0xc0007123c0) Stream removed, broadcasting: 5\n" Jul 15 00:23:04.705: INFO: stdout: "affinity-clusterip-timeout-fqkrm" Jul 15 00:23:04.705: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-1078, will wait for the garbage collector to delete the pods Jul 15 00:23:04.814: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 7.885116ms Jul 15 00:23:05.214: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 400.199592ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:23:19.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1078" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:53.361 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":294,"completed":178,"skipped":2844,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} S ------------------------------ [k8s.io] Container 
Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:23:19.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 15 00:23:22.449: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:23:22.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3846" for this suite. 
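The FallbackToLogsOnError behavior exercised by the test above can be reproduced with a minimal pod manifest. This is a hedged sketch, not the manifest the e2e test actually submits; the pod name and image are assumptions.

```yaml
# Hypothetical pod: the container prints "DONE" and exits non-zero.
# With terminationMessagePolicy: FallbackToLogsOnError and no file at
# the termination message path, the kubelet copies the tail of the
# container log into status.containerStatuses[0].state.terminated.message.
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                 # assumed image
    command: ["/bin/sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
```

Once the pod reaches Failed, reading `.status.containerStatuses[0].state.terminated.message` with `kubectl get pod -o jsonpath=...` should yield `DONE`, matching the "Expected: &{DONE} to match Container's Termination Message: DONE" assertion in the log above.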
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":294,"completed":179,"skipped":2845,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:23:22.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1017.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1017.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1017.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1017.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1017.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1017.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1017.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1017.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1017.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1017.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1017.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 122.144.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.144.122_udp@PTR;check="$$(dig +tcp +noall +answer +search 122.144.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.144.122_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1017.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1017.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1017.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1017.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1017.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1017.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1017.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1017.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1017.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1017.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1017.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 122.144.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.144.122_udp@PTR;check="$$(dig +tcp +noall +answer +search 122.144.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.144.122_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 15 00:23:28.884: INFO: Unable to read wheezy_udp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:28.887: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:28.889: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:28.891: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:28.914: INFO: Unable to read jessie_udp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:28.916: INFO: Unable to read jessie_tcp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:28.918: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod 
dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:28.920: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:28.967: INFO: Lookups using dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9 failed for: [wheezy_udp@dns-test-service.dns-1017.svc.cluster.local wheezy_tcp@dns-test-service.dns-1017.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local jessie_udp@dns-test-service.dns-1017.svc.cluster.local jessie_tcp@dns-test-service.dns-1017.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local] Jul 15 00:23:33.973: INFO: Unable to read wheezy_udp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:33.977: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:33.981: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:33.985: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod 
dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:34.011: INFO: Unable to read jessie_udp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:34.014: INFO: Unable to read jessie_tcp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:34.017: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:34.020: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:34.037: INFO: Lookups using dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9 failed for: [wheezy_udp@dns-test-service.dns-1017.svc.cluster.local wheezy_tcp@dns-test-service.dns-1017.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local jessie_udp@dns-test-service.dns-1017.svc.cluster.local jessie_tcp@dns-test-service.dns-1017.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local] Jul 15 00:23:38.990: INFO: Unable to read wheezy_udp@dns-test-service.dns-1017.svc.cluster.local from pod 
dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:38.994: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:38.998: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:39.001: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:39.022: INFO: Unable to read jessie_udp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:39.025: INFO: Unable to read jessie_tcp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:39.027: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:39.030: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not 
find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:39.047: INFO: Lookups using dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9 failed for: [wheezy_udp@dns-test-service.dns-1017.svc.cluster.local wheezy_tcp@dns-test-service.dns-1017.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local jessie_udp@dns-test-service.dns-1017.svc.cluster.local jessie_tcp@dns-test-service.dns-1017.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local] Jul 15 00:23:43.971: INFO: Unable to read wheezy_udp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:43.974: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:43.977: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:43.980: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:44.005: INFO: Unable to read jessie_udp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods 
dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:44.034: INFO: Unable to read jessie_tcp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:44.037: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:44.039: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:44.080: INFO: Lookups using dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9 failed for: [wheezy_udp@dns-test-service.dns-1017.svc.cluster.local wheezy_tcp@dns-test-service.dns-1017.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local jessie_udp@dns-test-service.dns-1017.svc.cluster.local jessie_tcp@dns-test-service.dns-1017.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local] Jul 15 00:23:48.971: INFO: Unable to read wheezy_udp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:48.974: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods 
dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:48.978: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:48.981: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:49.000: INFO: Unable to read jessie_udp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:49.003: INFO: Unable to read jessie_tcp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:49.006: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:49.009: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:49.023: INFO: Lookups using dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9 failed for: [wheezy_udp@dns-test-service.dns-1017.svc.cluster.local wheezy_tcp@dns-test-service.dns-1017.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local jessie_udp@dns-test-service.dns-1017.svc.cluster.local jessie_tcp@dns-test-service.dns-1017.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local] Jul 15 00:23:53.973: INFO: Unable to read wheezy_udp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:53.977: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:53.981: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:53.983: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:54.005: INFO: Unable to read jessie_udp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:54.009: INFO: Unable to read jessie_tcp@dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:54.011: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:54.014: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local from pod dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9: the server could not find the requested resource (get pods dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9) Jul 15 00:23:54.031: INFO: Lookups using dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9 failed for: [wheezy_udp@dns-test-service.dns-1017.svc.cluster.local wheezy_tcp@dns-test-service.dns-1017.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local jessie_udp@dns-test-service.dns-1017.svc.cluster.local jessie_tcp@dns-test-service.dns-1017.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1017.svc.cluster.local] Jul 15 00:23:59.063: INFO: DNS probes using dns-1017/dns-test-05d9bfe8-8b10-41e1-83b1-1ba47a3779a9 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:23:59.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1017" for this suite. 
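The `122.144.96.10.in-addr.arpa.` names in the dig probes above are the reverse-DNS (PTR) form of the service ClusterIP `10.96.144.122`: the IPv4 octets reversed under the `in-addr.arpa` zone. A minimal sketch of that construction, using Python's standard `ipaddress` module:

```python
import ipaddress

def ptr_name(ip: str) -> str:
    """Build the in-addr.arpa name that a PTR lookup for `ip` queries."""
    return ipaddress.ip_address(ip).reverse_pointer

print(ptr_name("10.96.144.122"))  # 122.144.96.10.in-addr.arpa
```

This is only the name construction; the test itself runs the actual lookups with `dig +noall +answer -x`-style queries from inside the probe pod.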
• [SLOW TEST:37.219 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":294,"completed":180,"skipped":2863,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:23:59.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:24:05.038: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2159" for this suite. • [SLOW TEST:5.180 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":294,"completed":181,"skipped":2871,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:24:05.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 15 00:24:05.178: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-2c70196c-25ad-4b86-95e1-73a46dee015a" in namespace "downward-api-1686" to be "Succeeded or Failed" Jul 15 00:24:05.208: INFO: Pod "downwardapi-volume-2c70196c-25ad-4b86-95e1-73a46dee015a": Phase="Pending", Reason="", readiness=false. Elapsed: 29.791854ms Jul 15 00:24:07.248: INFO: Pod "downwardapi-volume-2c70196c-25ad-4b86-95e1-73a46dee015a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069654723s Jul 15 00:24:09.252: INFO: Pod "downwardapi-volume-2c70196c-25ad-4b86-95e1-73a46dee015a": Phase="Running", Reason="", readiness=true. Elapsed: 4.074259282s Jul 15 00:24:11.257: INFO: Pod "downwardapi-volume-2c70196c-25ad-4b86-95e1-73a46dee015a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.078925373s STEP: Saw pod success Jul 15 00:24:11.257: INFO: Pod "downwardapi-volume-2c70196c-25ad-4b86-95e1-73a46dee015a" satisfied condition "Succeeded or Failed" Jul 15 00:24:11.261: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-2c70196c-25ad-4b86-95e1-73a46dee015a container client-container: STEP: delete the pod Jul 15 00:24:11.307: INFO: Waiting for pod downwardapi-volume-2c70196c-25ad-4b86-95e1-73a46dee015a to disappear Jul 15 00:24:11.349: INFO: Pod downwardapi-volume-2c70196c-25ad-4b86-95e1-73a46dee015a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:24:11.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1686" for this suite. 
• [SLOW TEST:6.286 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":294,"completed":182,"skipped":2880,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:24:11.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:24:11.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"services-2309" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":294,"completed":183,"skipped":2924,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:24:11.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:24:11.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9456" for this suite. STEP: Destroying namespace "nspatchtest-610a7612-7181-4c80-baee-e4b9589aee70-8800" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":294,"completed":184,"skipped":2925,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:24:11.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 00:24:11.714: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jul 15 00:24:13.903: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:24:15.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6942" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":294,"completed":185,"skipped":2928,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:24:15.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-a892fb9f-736f-4355-b968-7723a4e9f75c STEP: Creating a pod to test consume secrets Jul 15 00:24:15.726: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bbd91507-2aba-44f0-b50e-6ae16d20465b" in namespace "projected-8649" to be "Succeeded or Failed" Jul 15 00:24:15.729: INFO: Pod "pod-projected-secrets-bbd91507-2aba-44f0-b50e-6ae16d20465b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.739409ms Jul 15 00:24:17.849: INFO: Pod "pod-projected-secrets-bbd91507-2aba-44f0-b50e-6ae16d20465b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123481932s Jul 15 00:24:19.853: INFO: Pod "pod-projected-secrets-bbd91507-2aba-44f0-b50e-6ae16d20465b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.126711508s STEP: Saw pod success Jul 15 00:24:19.853: INFO: Pod "pod-projected-secrets-bbd91507-2aba-44f0-b50e-6ae16d20465b" satisfied condition "Succeeded or Failed" Jul 15 00:24:19.855: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-bbd91507-2aba-44f0-b50e-6ae16d20465b container projected-secret-volume-test: STEP: delete the pod Jul 15 00:24:19.901: INFO: Waiting for pod pod-projected-secrets-bbd91507-2aba-44f0-b50e-6ae16d20465b to disappear Jul 15 00:24:19.966: INFO: Pod pod-projected-secrets-bbd91507-2aba-44f0-b50e-6ae16d20465b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:24:19.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8649" for this suite. 
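The projected-secret test above consumes a Secret through a `projected` volume source, which exposes the secret's keys as files under the mount. A hedged sketch of that pod shape as a plain dict (image, key names, command, and mount path are illustrative assumptions):

```python
def projected_secret_pod(name: str, secret_name: str) -> dict:
    # Pod mounting a Secret via a projected volume; each secret key
    # becomes a file under the mountPath.
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "projected-secret-volume-test",
                "image": "busybox",  # illustrative
                "command": ["cat", "/etc/projected-secret-volume/data-1"],
                "volumeMounts": [{"name": "projected-secret-volume",
                                  "mountPath": "/etc/projected-secret-volume"}],
            }],
            "volumes": [{
                "name": "projected-secret-volume",
                "projected": {"sources": [{"secret": {"name": secret_name}}]},
            }],
        },
    }
```

A projected volume can also combine configMap, downwardAPI, and serviceAccountToken sources in the same `sources` list, which is what distinguishes it from a plain `secret` volume.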
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":294,"completed":186,"skipped":2930,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:24:19.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0715 00:24:32.595057 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Jul 15 00:24:34.689: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Jul 15 00:24:34.689: INFO: Deleting pod "simpletest-rc-to-be-deleted-5x5m4" in namespace "gc-5695" Jul 15 00:24:34.730: INFO: Deleting pod "simpletest-rc-to-be-deleted-5xfm6" in namespace "gc-5695" Jul 15 00:24:34.814: INFO: Deleting pod "simpletest-rc-to-be-deleted-7j82x" in namespace "gc-5695" Jul 15 00:24:34.925: INFO: Deleting pod "simpletest-rc-to-be-deleted-b9lwl" in namespace "gc-5695" Jul 15 00:24:35.242: INFO: Deleting pod "simpletest-rc-to-be-deleted-c488j" in namespace "gc-5695" [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:24:35.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5695" for this suite. 
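The garbage-collector test above gives half the pods two owners, then foreground-deletes one owner; a pod must survive as long as any valid owner remains. A small sketch of that ownership rule (helper names and UIDs are illustrative; the real GC works on the full ownerReference graph):

```python
def pod_with_owners(name: str, owners: list) -> dict:
    # Pod carrying ownerReferences to several ReplicationControllers.
    return {"metadata": {"name": name, "ownerReferences": [
        {"apiVersion": "v1", "kind": "ReplicationController",
         "name": o, "uid": f"uid-{o}"} for o in owners]}}

def blocked_by_valid_owner(pod: dict, deleted_owner: str) -> bool:
    # The GC must not delete a dependent while an owner other than the
    # one being deleted still exists.
    return any(ref["name"] != deleted_owner
               for ref in pod["metadata"]["ownerReferences"])

shared = pod_with_owners("simpletest-pod",
                         ["simpletest-rc-to-be-deleted", "simpletest-rc-to-stay"])
```

In the log, `simpletest-rc-to-be-deleted` is removed while `simpletest-rc-to-stay` remains, so the shared pods are kept and are only cleaned up explicitly at the end of the test.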
• [SLOW TEST:15.521 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":294,"completed":187,"skipped":2933,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:24:35.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 15 00:24:40.034: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:24:40.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5606" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":294,"completed":188,"skipped":3020,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:24:40.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-4967a893-8c31-42ea-9cc3-0f1504d7d4c1 STEP: Creating a pod to test consume configMaps Jul 15 00:24:40.193: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-867544f6-aca4-464a-ac22-10d8ad2ec102" in namespace "projected-3878" to be "Succeeded or Failed" Jul 15 00:24:40.216: INFO: Pod "pod-projected-configmaps-867544f6-aca4-464a-ac22-10d8ad2ec102": Phase="Pending", Reason="", readiness=false. Elapsed: 22.664851ms Jul 15 00:24:42.309: INFO: Pod "pod-projected-configmaps-867544f6-aca4-464a-ac22-10d8ad2ec102": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115684027s Jul 15 00:24:44.314: INFO: Pod "pod-projected-configmaps-867544f6-aca4-464a-ac22-10d8ad2ec102": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.120393477s STEP: Saw pod success Jul 15 00:24:44.314: INFO: Pod "pod-projected-configmaps-867544f6-aca4-464a-ac22-10d8ad2ec102" satisfied condition "Succeeded or Failed" Jul 15 00:24:44.317: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-867544f6-aca4-464a-ac22-10d8ad2ec102 container projected-configmap-volume-test: STEP: delete the pod Jul 15 00:24:44.447: INFO: Waiting for pod pod-projected-configmaps-867544f6-aca4-464a-ac22-10d8ad2ec102 to disappear Jul 15 00:24:44.455: INFO: Pod pod-projected-configmaps-867544f6-aca4-464a-ac22-10d8ad2ec102 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:24:44.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3878" for this suite. 
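Several tests in this run wait up to 5m0s for a pod to satisfy "Succeeded or Failed", logging the phase and elapsed time on each poll. A sketch of that wait condition's decision logic (the phase sequence mirrors the Pending/Pending/Succeeded runs in the log; the loop shape is an assumption, not the framework's actual Go code):

```python
def is_terminal(phase: str) -> bool:
    # The "Succeeded or Failed" condition: polling stops once the pod
    # reaches either terminal phase.
    return phase in ("Succeeded", "Failed")

def polls_until_terminal(observed_phases: list) -> int:
    # Number of polls consumed before the condition is satisfied.
    for i, phase in enumerate(observed_phases, start=1):
        if is_terminal(phase):
            return i
    raise TimeoutError("pod never reached a terminal phase")
```

Note that "Running" does not satisfy the condition, which is why the downward API pod earlier logged a Running poll before the final Succeeded one.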
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":294,"completed":189,"skipped":3037,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:24:44.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Jul 15 00:24:44.511: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:24:44.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8293" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":294,"completed":190,"skipped":3046,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:24:44.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jul 15 00:24:44.761: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:24:44.841: INFO: Number of nodes with available pods: 0 Jul 15 00:24:44.841: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:24:45.846: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:24:45.850: INFO: Number of nodes with available pods: 0 Jul 15 00:24:45.850: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:24:46.847: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:24:46.850: INFO: Number of nodes with available pods: 0 Jul 15 00:24:46.850: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:24:47.914: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:24:47.995: INFO: Number of nodes with available pods: 0 Jul 15 00:24:47.995: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:24:48.896: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:24:48.900: INFO: Number of nodes with available pods: 0 Jul 15 00:24:48.900: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:24:49.855: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:24:49.859: INFO: Number of nodes with available pods: 2 Jul 15 00:24:49.859: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Jul 15 00:24:49.875: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:24:49.877: INFO: Number of nodes with available pods: 1 Jul 15 00:24:49.877: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:24:50.882: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:24:50.886: INFO: Number of nodes with available pods: 1 Jul 15 00:24:50.886: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:24:51.883: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:24:51.888: INFO: Number of nodes with available pods: 1 Jul 15 00:24:51.888: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:24:52.881: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:24:52.884: INFO: Number of nodes with available pods: 1 Jul 15 00:24:52.884: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:24:53.883: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:24:53.887: INFO: Number of nodes with available pods: 1 Jul 15 00:24:53.887: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:24:54.883: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node Jul 15 00:24:54.886: INFO: Number of nodes with available pods: 1 Jul 15 00:24:54.886: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:24:55.883: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:24:55.887: INFO: Number of nodes with available pods: 1 Jul 15 00:24:55.887: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:24:56.883: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:24:56.888: INFO: Number of nodes with available pods: 1 Jul 15 00:24:56.888: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:24:57.882: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:24:57.886: INFO: Number of nodes with available pods: 1 Jul 15 00:24:57.886: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:24:58.883: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:24:58.887: INFO: Number of nodes with available pods: 1 Jul 15 00:24:58.887: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:24:59.883: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:24:59.886: INFO: Number of nodes with available pods: 1 Jul 15 00:24:59.886: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:25:00.959: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:25:00.963: INFO: Number of nodes with available pods: 1 Jul 15 00:25:00.963: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:25:01.882: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:25:01.885: INFO: Number of nodes with available pods: 1 Jul 15 00:25:01.885: INFO: Node latest-worker is running more than one daemon pod Jul 15 00:25:02.882: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 00:25:02.886: INFO: Number of nodes with available pods: 2 Jul 15 00:25:02.886: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9064, will wait for the garbage collector to delete the pods Jul 15 00:25:02.948: INFO: Deleting DaemonSet.extensions daemon-set took: 6.422892ms Jul 15 00:25:05.048: INFO: Terminating DaemonSet.extensions daemon-set pods took: 2.1002524s Jul 15 00:25:07.667: INFO: Number of nodes with available pods: 0 Jul 15 00:25:07.667: INFO: Number of running nodes: 0, number of available pods: 0 Jul 15 00:25:07.671: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9064/daemonsets","resourceVersion":"1229955"},"items":null} Jul 15 00:25:07.673: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9064/pods","resourceVersion":"1229955"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:25:07.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9064" for this suite. • [SLOW TEST:23.078 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":294,"completed":191,"skipped":3048,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:25:07.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-95e428b5-3d7c-4160-b536-80e21066646b STEP: Creating a pod to test consume configMaps Jul 15 00:25:07.806: INFO: Waiting up to 
5m0s for pod "pod-configmaps-2aaaf317-6176-4426-b396-c2fa724c5072" in namespace "configmap-9093" to be "Succeeded or Failed" Jul 15 00:25:07.809: INFO: Pod "pod-configmaps-2aaaf317-6176-4426-b396-c2fa724c5072": Phase="Pending", Reason="", readiness=false. Elapsed: 2.966897ms Jul 15 00:25:09.812: INFO: Pod "pod-configmaps-2aaaf317-6176-4426-b396-c2fa724c5072": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006724692s Jul 15 00:25:11.834: INFO: Pod "pod-configmaps-2aaaf317-6176-4426-b396-c2fa724c5072": Phase="Running", Reason="", readiness=true. Elapsed: 4.028185608s Jul 15 00:25:13.838: INFO: Pod "pod-configmaps-2aaaf317-6176-4426-b396-c2fa724c5072": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032077023s STEP: Saw pod success Jul 15 00:25:13.838: INFO: Pod "pod-configmaps-2aaaf317-6176-4426-b396-c2fa724c5072" satisfied condition "Succeeded or Failed" Jul 15 00:25:13.841: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-2aaaf317-6176-4426-b396-c2fa724c5072 container configmap-volume-test: STEP: delete the pod Jul 15 00:25:13.975: INFO: Waiting for pod pod-configmaps-2aaaf317-6176-4426-b396-c2fa724c5072 to disappear Jul 15 00:25:13.983: INFO: Pod pod-configmaps-2aaaf317-6176-4426-b396-c2fa724c5072 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:25:13.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9093" for this suite. 
• [SLOW TEST:6.307 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":294,"completed":192,"skipped":3050,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:25:13.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Jul 15 00:25:14.100: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:25:21.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4276" for this suite. • [SLOW TEST:7.648 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":294,"completed":193,"skipped":3065,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:25:21.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Jul 15 00:25:21.733: INFO: Waiting up to 
5m0s for pod "pod-8fdc2301-fccc-41a7-942d-4eb11104dd75" in namespace "emptydir-3872" to be "Succeeded or Failed" Jul 15 00:25:21.759: INFO: Pod "pod-8fdc2301-fccc-41a7-942d-4eb11104dd75": Phase="Pending", Reason="", readiness=false. Elapsed: 25.633137ms Jul 15 00:25:23.801: INFO: Pod "pod-8fdc2301-fccc-41a7-942d-4eb11104dd75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067307351s Jul 15 00:25:25.805: INFO: Pod "pod-8fdc2301-fccc-41a7-942d-4eb11104dd75": Phase="Running", Reason="", readiness=true. Elapsed: 4.071394739s Jul 15 00:25:27.809: INFO: Pod "pod-8fdc2301-fccc-41a7-942d-4eb11104dd75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.075642199s STEP: Saw pod success Jul 15 00:25:27.809: INFO: Pod "pod-8fdc2301-fccc-41a7-942d-4eb11104dd75" satisfied condition "Succeeded or Failed" Jul 15 00:25:27.812: INFO: Trying to get logs from node latest-worker pod pod-8fdc2301-fccc-41a7-942d-4eb11104dd75 container test-container: STEP: delete the pod Jul 15 00:25:27.861: INFO: Waiting for pod pod-8fdc2301-fccc-41a7-942d-4eb11104dd75 to disappear Jul 15 00:25:27.875: INFO: Pod pod-8fdc2301-fccc-41a7-942d-4eb11104dd75 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:25:27.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3872" for this suite. 
• [SLOW TEST:6.236 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":194,"skipped":3068,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:25:27.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] 
Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:25:27.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-372" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":294,"completed":195,"skipped":3121,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:25:27.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:25:32.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5531" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":294,"completed":196,"skipped":3136,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:25:32.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-7ea665d9-a778-4621-a8e0-fa9667f2bba7 STEP: Creating a pod to test consume secrets Jul 15 00:25:32.186: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-83c513bc-a729-4bb1-ae21-3b951428198a" in namespace "projected-5782" to be "Succeeded or Failed" Jul 15 00:25:32.218: INFO: Pod 
"pod-projected-secrets-83c513bc-a729-4bb1-ae21-3b951428198a": Phase="Pending", Reason="", readiness=false. Elapsed: 32.10242ms Jul 15 00:25:34.223: INFO: Pod "pod-projected-secrets-83c513bc-a729-4bb1-ae21-3b951428198a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036761914s Jul 15 00:25:36.227: INFO: Pod "pod-projected-secrets-83c513bc-a729-4bb1-ae21-3b951428198a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041395174s STEP: Saw pod success Jul 15 00:25:36.227: INFO: Pod "pod-projected-secrets-83c513bc-a729-4bb1-ae21-3b951428198a" satisfied condition "Succeeded or Failed" Jul 15 00:25:36.230: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-83c513bc-a729-4bb1-ae21-3b951428198a container projected-secret-volume-test: STEP: delete the pod Jul 15 00:25:36.262: INFO: Waiting for pod pod-projected-secrets-83c513bc-a729-4bb1-ae21-3b951428198a to disappear Jul 15 00:25:36.271: INFO: Pod pod-projected-secrets-83c513bc-a729-4bb1-ae21-3b951428198a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:25:36.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5782" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":197,"skipped":3141,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:25:36.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 00:25:36.361: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:25:37.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4770" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":294,"completed":198,"skipped":3151,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:25:37.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4903 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4903 STEP: Creating statefulset with conflicting port in namespace statefulset-4903 STEP: Waiting until pod test-pod will start running in namespace statefulset-4903 STEP: Waiting until stateful pod 
ss-0 will be recreated and deleted at least once in namespace statefulset-4903 Jul 15 00:25:41.731: INFO: Observed stateful pod in namespace: statefulset-4903, name: ss-0, uid: 085b91e2-d968-470a-bed1-8df3516f4a74, status phase: Pending. Waiting for statefulset controller to delete. Jul 15 00:25:41.895: INFO: Observed stateful pod in namespace: statefulset-4903, name: ss-0, uid: 085b91e2-d968-470a-bed1-8df3516f4a74, status phase: Failed. Waiting for statefulset controller to delete. Jul 15 00:25:41.934: INFO: Observed stateful pod in namespace: statefulset-4903, name: ss-0, uid: 085b91e2-d968-470a-bed1-8df3516f4a74, status phase: Failed. Waiting for statefulset controller to delete. Jul 15 00:25:41.950: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4903 STEP: Removing pod with conflicting port in namespace statefulset-4903 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4903 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jul 15 00:25:48.547: INFO: Deleting all statefulset in ns statefulset-4903 Jul 15 00:25:48.550: INFO: Scaling statefulset ss to 0 Jul 15 00:26:08.580: INFO: Waiting for statefulset status.replicas updated to 0 Jul 15 00:26:08.584: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:26:08.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4903" for this suite. 
• [SLOW TEST:31.031 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":294,"completed":199,"skipped":3153,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:26:08.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-50b2fd99-93e1-47b2-b7a2-5ee201c49ad9 STEP: Creating a pod to test consume configMaps Jul 15 00:26:08.845: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-c6ead94a-cfd8-49d6-b017-327319b792e0" in namespace "configmap-8298" to be "Succeeded or Failed" Jul 15 00:26:08.885: INFO: Pod "pod-configmaps-c6ead94a-cfd8-49d6-b017-327319b792e0": Phase="Pending", Reason="", readiness=false. Elapsed: 39.865824ms Jul 15 00:26:10.889: INFO: Pod "pod-configmaps-c6ead94a-cfd8-49d6-b017-327319b792e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043986853s Jul 15 00:26:12.893: INFO: Pod "pod-configmaps-c6ead94a-cfd8-49d6-b017-327319b792e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047933464s STEP: Saw pod success Jul 15 00:26:12.893: INFO: Pod "pod-configmaps-c6ead94a-cfd8-49d6-b017-327319b792e0" satisfied condition "Succeeded or Failed" Jul 15 00:26:12.895: INFO: Trying to get logs from node latest-worker pod pod-configmaps-c6ead94a-cfd8-49d6-b017-327319b792e0 container configmap-volume-test: STEP: delete the pod Jul 15 00:26:13.017: INFO: Waiting for pod pod-configmaps-c6ead94a-cfd8-49d6-b017-327319b792e0 to disappear Jul 15 00:26:13.045: INFO: Pod pod-configmaps-c6ead94a-cfd8-49d6-b017-327319b792e0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:26:13.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8298" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":294,"completed":200,"skipped":3158,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:26:13.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jul 15 00:26:13.240: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 15 00:26:13.267: INFO: Waiting for terminating namespaces to be deleted... 
Jul 15 00:26:13.269: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jul 15 00:26:13.275: INFO: kindnet-qt4jk from kube-system started at 2020-07-10 10:30:16 +0000 UTC (1 container statuses recorded) Jul 15 00:26:13.275: INFO: Container kindnet-cni ready: true, restart count 0 Jul 15 00:26:13.275: INFO: kube-proxy-xb9q4 from kube-system started at 2020-07-10 10:30:16 +0000 UTC (1 container statuses recorded) Jul 15 00:26:13.275: INFO: Container kube-proxy ready: true, restart count 0 Jul 15 00:26:13.275: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jul 15 00:26:13.299: INFO: kindnet-gkkxx from kube-system started at 2020-07-10 10:30:17 +0000 UTC (1 container statuses recorded) Jul 15 00:26:13.299: INFO: Container kindnet-cni ready: true, restart count 0 Jul 15 00:26:13.299: INFO: kube-proxy-s596l from kube-system started at 2020-07-10 10:30:17 +0000 UTC (1 container statuses recorded) Jul 15 00:26:13.299: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1621c5634b957404], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.1621c5634dadaa9b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:26:14.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3698" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":294,"completed":201,"skipped":3198,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:26:14.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4937 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-4937 I0715 00:26:14.578474 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4937, replica count: 2 I0715 00:26:17.629061 7 
runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0715 00:26:20.629351 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 15 00:26:20.629: INFO: Creating new exec pod Jul 15 00:26:25.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-4937 execpodszj79 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jul 15 00:26:25.885: INFO: stderr: "I0715 00:26:25.782708 1849 log.go:181] (0xc00003afd0) (0xc000da5720) Create stream\nI0715 00:26:25.782758 1849 log.go:181] (0xc00003afd0) (0xc000da5720) Stream added, broadcasting: 1\nI0715 00:26:25.788098 1849 log.go:181] (0xc00003afd0) Reply frame received for 1\nI0715 00:26:25.788149 1849 log.go:181] (0xc00003afd0) (0xc000d788c0) Create stream\nI0715 00:26:25.788174 1849 log.go:181] (0xc00003afd0) (0xc000d788c0) Stream added, broadcasting: 3\nI0715 00:26:25.789529 1849 log.go:181] (0xc00003afd0) Reply frame received for 3\nI0715 00:26:25.789576 1849 log.go:181] (0xc00003afd0) (0xc000528460) Create stream\nI0715 00:26:25.789595 1849 log.go:181] (0xc00003afd0) (0xc000528460) Stream added, broadcasting: 5\nI0715 00:26:25.790588 1849 log.go:181] (0xc00003afd0) Reply frame received for 5\nI0715 00:26:25.877004 1849 log.go:181] (0xc00003afd0) Data frame received for 5\nI0715 00:26:25.877058 1849 log.go:181] (0xc000528460) (5) Data frame handling\nI0715 00:26:25.877081 1849 log.go:181] (0xc000528460) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0715 00:26:25.877315 1849 log.go:181] (0xc00003afd0) Data frame received for 5\nI0715 00:26:25.877332 1849 log.go:181] (0xc000528460) (5) Data frame handling\nI0715 00:26:25.877350 1849 log.go:181] (0xc000528460) (5) Data frame sent\nConnection to externalname-service 80 port 
[tcp/http] succeeded!\nI0715 00:26:25.877780 1849 log.go:181] (0xc00003afd0) Data frame received for 3\nI0715 00:26:25.877800 1849 log.go:181] (0xc000d788c0) (3) Data frame handling\nI0715 00:26:25.877936 1849 log.go:181] (0xc00003afd0) Data frame received for 5\nI0715 00:26:25.877964 1849 log.go:181] (0xc000528460) (5) Data frame handling\nI0715 00:26:25.880114 1849 log.go:181] (0xc00003afd0) Data frame received for 1\nI0715 00:26:25.880138 1849 log.go:181] (0xc000da5720) (1) Data frame handling\nI0715 00:26:25.880151 1849 log.go:181] (0xc000da5720) (1) Data frame sent\nI0715 00:26:25.880165 1849 log.go:181] (0xc00003afd0) (0xc000da5720) Stream removed, broadcasting: 1\nI0715 00:26:25.880329 1849 log.go:181] (0xc00003afd0) Go away received\nI0715 00:26:25.880602 1849 log.go:181] (0xc00003afd0) (0xc000da5720) Stream removed, broadcasting: 1\nI0715 00:26:25.880629 1849 log.go:181] (0xc00003afd0) (0xc000d788c0) Stream removed, broadcasting: 3\nI0715 00:26:25.880645 1849 log.go:181] (0xc00003afd0) (0xc000528460) Stream removed, broadcasting: 5\n" Jul 15 00:26:25.886: INFO: stdout: "" Jul 15 00:26:25.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-4937 execpodszj79 -- /bin/sh -x -c nc -zv -t -w 2 10.105.250.136 80' Jul 15 00:26:26.096: INFO: stderr: "I0715 00:26:26.022078 1867 log.go:181] (0xc0008aefd0) (0xc000864e60) Create stream\nI0715 00:26:26.022136 1867 log.go:181] (0xc0008aefd0) (0xc000864e60) Stream added, broadcasting: 1\nI0715 00:26:26.025528 1867 log.go:181] (0xc0008aefd0) Reply frame received for 1\nI0715 00:26:26.025592 1867 log.go:181] (0xc0008aefd0) (0xc00091e320) Create stream\nI0715 00:26:26.025621 1867 log.go:181] (0xc0008aefd0) (0xc00091e320) Stream added, broadcasting: 3\nI0715 00:26:26.027176 1867 log.go:181] (0xc0008aefd0) Reply frame received for 3\nI0715 00:26:26.027196 1867 log.go:181] (0xc0008aefd0) (0xc00091e3c0) Create stream\nI0715 00:26:26.027203 1867 
log.go:181] (0xc0008aefd0) (0xc00091e3c0) Stream added, broadcasting: 5\nI0715 00:26:26.028005 1867 log.go:181] (0xc0008aefd0) Reply frame received for 5\nI0715 00:26:26.087922 1867 log.go:181] (0xc0008aefd0) Data frame received for 5\nI0715 00:26:26.087953 1867 log.go:181] (0xc00091e3c0) (5) Data frame handling\nI0715 00:26:26.087974 1867 log.go:181] (0xc00091e3c0) (5) Data frame sent\nI0715 00:26:26.087979 1867 log.go:181] (0xc0008aefd0) Data frame received for 5\nI0715 00:26:26.087985 1867 log.go:181] (0xc00091e3c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.250.136 80\nConnection to 10.105.250.136 80 port [tcp/http] succeeded!\nI0715 00:26:26.088001 1867 log.go:181] (0xc0008aefd0) Data frame received for 3\nI0715 00:26:26.088019 1867 log.go:181] (0xc00091e320) (3) Data frame handling\nI0715 00:26:26.089732 1867 log.go:181] (0xc0008aefd0) Data frame received for 1\nI0715 00:26:26.089789 1867 log.go:181] (0xc000864e60) (1) Data frame handling\nI0715 00:26:26.089832 1867 log.go:181] (0xc000864e60) (1) Data frame sent\nI0715 00:26:26.089878 1867 log.go:181] (0xc0008aefd0) (0xc000864e60) Stream removed, broadcasting: 1\nI0715 00:26:26.089901 1867 log.go:181] (0xc0008aefd0) Go away received\nI0715 00:26:26.090798 1867 log.go:181] (0xc0008aefd0) (0xc000864e60) Stream removed, broadcasting: 1\nI0715 00:26:26.090836 1867 log.go:181] (0xc0008aefd0) (0xc00091e320) Stream removed, broadcasting: 3\nI0715 00:26:26.090850 1867 log.go:181] (0xc0008aefd0) (0xc00091e3c0) Stream removed, broadcasting: 5\n" Jul 15 00:26:26.096: INFO: stdout: "" Jul 15 00:26:26.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-4937 execpodszj79 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31794' Jul 15 00:26:26.289: INFO: stderr: "I0715 00:26:26.212605 1884 log.go:181] (0xc000e1afd0) (0xc000e94500) Create stream\nI0715 00:26:26.212652 1884 log.go:181] (0xc000e1afd0) (0xc000e94500) Stream added, broadcasting: 
1\nI0715 00:26:26.217524 1884 log.go:181] (0xc000e1afd0) Reply frame received for 1\nI0715 00:26:26.217569 1884 log.go:181] (0xc000e1afd0) (0xc000b19180) Create stream\nI0715 00:26:26.217583 1884 log.go:181] (0xc000e1afd0) (0xc000b19180) Stream added, broadcasting: 3\nI0715 00:26:26.218412 1884 log.go:181] (0xc000e1afd0) Reply frame received for 3\nI0715 00:26:26.218450 1884 log.go:181] (0xc000e1afd0) (0xc000390500) Create stream\nI0715 00:26:26.218462 1884 log.go:181] (0xc000e1afd0) (0xc000390500) Stream added, broadcasting: 5\nI0715 00:26:26.219529 1884 log.go:181] (0xc000e1afd0) Reply frame received for 5\nI0715 00:26:26.282038 1884 log.go:181] (0xc000e1afd0) Data frame received for 3\nI0715 00:26:26.282076 1884 log.go:181] (0xc000b19180) (3) Data frame handling\nI0715 00:26:26.282104 1884 log.go:181] (0xc000e1afd0) Data frame received for 5\nI0715 00:26:26.282115 1884 log.go:181] (0xc000390500) (5) Data frame handling\nI0715 00:26:26.282127 1884 log.go:181] (0xc000390500) (5) Data frame sent\nI0715 00:26:26.282138 1884 log.go:181] (0xc000e1afd0) Data frame received for 5\nI0715 00:26:26.282147 1884 log.go:181] (0xc000390500) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 31794\nConnection to 172.18.0.14 31794 port [tcp/31794] succeeded!\nI0715 00:26:26.283580 1884 log.go:181] (0xc000e1afd0) Data frame received for 1\nI0715 00:26:26.283604 1884 log.go:181] (0xc000e94500) (1) Data frame handling\nI0715 00:26:26.283614 1884 log.go:181] (0xc000e94500) (1) Data frame sent\nI0715 00:26:26.283627 1884 log.go:181] (0xc000e1afd0) (0xc000e94500) Stream removed, broadcasting: 1\nI0715 00:26:26.283835 1884 log.go:181] (0xc000e1afd0) Go away received\nI0715 00:26:26.284002 1884 log.go:181] (0xc000e1afd0) (0xc000e94500) Stream removed, broadcasting: 1\nI0715 00:26:26.284030 1884 log.go:181] (0xc000e1afd0) (0xc000b19180) Stream removed, broadcasting: 3\nI0715 00:26:26.284046 1884 log.go:181] (0xc000e1afd0) (0xc000390500) Stream removed, broadcasting: 5\n" Jul 15 
00:26:26.289: INFO: stdout: "" Jul 15 00:26:26.289: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-4937 execpodszj79 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 31794' Jul 15 00:26:26.496: INFO: stderr: "I0715 00:26:26.420469 1902 log.go:181] (0xc000543130) (0xc000a57540) Create stream\nI0715 00:26:26.420543 1902 log.go:181] (0xc000543130) (0xc000a57540) Stream added, broadcasting: 1\nI0715 00:26:26.426955 1902 log.go:181] (0xc000543130) Reply frame received for 1\nI0715 00:26:26.426988 1902 log.go:181] (0xc000543130) (0xc000a29180) Create stream\nI0715 00:26:26.426997 1902 log.go:181] (0xc000543130) (0xc000a29180) Stream added, broadcasting: 3\nI0715 00:26:26.427794 1902 log.go:181] (0xc000543130) Reply frame received for 3\nI0715 00:26:26.427821 1902 log.go:181] (0xc000543130) (0xc0009aa6e0) Create stream\nI0715 00:26:26.427830 1902 log.go:181] (0xc000543130) (0xc0009aa6e0) Stream added, broadcasting: 5\nI0715 00:26:26.428612 1902 log.go:181] (0xc000543130) Reply frame received for 5\nI0715 00:26:26.487893 1902 log.go:181] (0xc000543130) Data frame received for 5\nI0715 00:26:26.487927 1902 log.go:181] (0xc0009aa6e0) (5) Data frame handling\nI0715 00:26:26.487949 1902 log.go:181] (0xc0009aa6e0) (5) Data frame sent\nI0715 00:26:26.487957 1902 log.go:181] (0xc000543130) Data frame received for 5\nI0715 00:26:26.487964 1902 log.go:181] (0xc0009aa6e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 31794\nConnection to 172.18.0.11 31794 port [tcp/31794] succeeded!\nI0715 00:26:26.488053 1902 log.go:181] (0xc000543130) Data frame received for 3\nI0715 00:26:26.488089 1902 log.go:181] (0xc000a29180) (3) Data frame handling\nI0715 00:26:26.489811 1902 log.go:181] (0xc000543130) Data frame received for 1\nI0715 00:26:26.489848 1902 log.go:181] (0xc000a57540) (1) Data frame handling\nI0715 00:26:26.489869 1902 log.go:181] (0xc000a57540) (1) Data frame sent\nI0715 00:26:26.489893 
1902 log.go:181] (0xc000543130) (0xc000a57540) Stream removed, broadcasting: 1\nI0715 00:26:26.489916 1902 log.go:181] (0xc000543130) Go away received\nI0715 00:26:26.490282 1902 log.go:181] (0xc000543130) (0xc000a57540) Stream removed, broadcasting: 1\nI0715 00:26:26.490302 1902 log.go:181] (0xc000543130) (0xc000a29180) Stream removed, broadcasting: 3\nI0715 00:26:26.490311 1902 log.go:181] (0xc000543130) (0xc0009aa6e0) Stream removed, broadcasting: 5\n" Jul 15 00:26:26.496: INFO: stdout: "" Jul 15 00:26:26.496: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:26:26.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4937" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:12.239 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":294,"completed":202,"skipped":3220,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:26:26.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Jul 15 00:26:30.696: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-805 PodName:var-expansion-5eadfa7a-1096-4cc3-8ffc-f282c0f60391 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 00:26:30.696: INFO: >>> kubeConfig: /root/.kube/config I0715 00:26:30.733511 7 log.go:181] (0xc00206cbb0) (0xc0033a99a0) Create stream I0715 00:26:30.733537 7 log.go:181] (0xc00206cbb0) (0xc0033a99a0) Stream added, broadcasting: 1 I0715 00:26:30.735099 7 log.go:181] (0xc00206cbb0) Reply frame received for 1 I0715 00:26:30.735143 7 log.go:181] (0xc00206cbb0) (0xc002e4bf40) Create stream I0715 00:26:30.735154 7 log.go:181] (0xc00206cbb0) (0xc002e4bf40) Stream added, broadcasting: 3 I0715 00:26:30.736025 7 log.go:181] (0xc00206cbb0) Reply frame received for 3 I0715 00:26:30.736051 7 log.go:181] (0xc00206cbb0) (0xc0032590e0) Create stream I0715 00:26:30.736059 7 log.go:181] (0xc00206cbb0) (0xc0032590e0) Stream added, broadcasting: 5 I0715 00:26:30.736871 7 log.go:181] (0xc00206cbb0) Reply frame received for 5 I0715 00:26:30.794165 7 log.go:181] (0xc00206cbb0) Data frame received for 5 I0715 00:26:30.794189 7 log.go:181] (0xc0032590e0) (5) Data 
frame handling I0715 00:26:30.794252 7 log.go:181] (0xc00206cbb0) Data frame received for 3 I0715 00:26:30.794286 7 log.go:181] (0xc002e4bf40) (3) Data frame handling I0715 00:26:30.795599 7 log.go:181] (0xc00206cbb0) Data frame received for 1 I0715 00:26:30.795623 7 log.go:181] (0xc0033a99a0) (1) Data frame handling I0715 00:26:30.795641 7 log.go:181] (0xc0033a99a0) (1) Data frame sent I0715 00:26:30.795663 7 log.go:181] (0xc00206cbb0) (0xc0033a99a0) Stream removed, broadcasting: 1 I0715 00:26:30.795689 7 log.go:181] (0xc00206cbb0) Go away received I0715 00:26:30.795814 7 log.go:181] (0xc00206cbb0) (0xc0033a99a0) Stream removed, broadcasting: 1 I0715 00:26:30.795830 7 log.go:181] (0xc00206cbb0) (0xc002e4bf40) Stream removed, broadcasting: 3 I0715 00:26:30.795845 7 log.go:181] (0xc00206cbb0) (0xc0032590e0) Stream removed, broadcasting: 5 STEP: test for file in mounted path Jul 15 00:26:30.799: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-805 PodName:var-expansion-5eadfa7a-1096-4cc3-8ffc-f282c0f60391 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 00:26:30.799: INFO: >>> kubeConfig: /root/.kube/config I0715 00:26:30.828119 7 log.go:181] (0xc00414a210) (0xc003259360) Create stream I0715 00:26:30.828142 7 log.go:181] (0xc00414a210) (0xc003259360) Stream added, broadcasting: 1 I0715 00:26:30.831419 7 log.go:181] (0xc00414a210) Reply frame received for 1 I0715 00:26:30.831519 7 log.go:181] (0xc00414a210) (0xc003259400) Create stream I0715 00:26:30.831551 7 log.go:181] (0xc00414a210) (0xc003259400) Stream added, broadcasting: 3 I0715 00:26:30.833129 7 log.go:181] (0xc00414a210) Reply frame received for 3 I0715 00:26:30.833168 7 log.go:181] (0xc00414a210) (0xc002d55040) Create stream I0715 00:26:30.833184 7 log.go:181] (0xc00414a210) (0xc002d55040) Stream added, broadcasting: 5 I0715 00:26:30.834020 7 log.go:181] (0xc00414a210) Reply frame received for 5 I0715 
00:26:30.884921 7 log.go:181] (0xc00414a210) Data frame received for 3 I0715 00:26:30.884960 7 log.go:181] (0xc003259400) (3) Data frame handling I0715 00:26:30.884996 7 log.go:181] (0xc00414a210) Data frame received for 5 I0715 00:26:30.885026 7 log.go:181] (0xc002d55040) (5) Data frame handling I0715 00:26:30.886157 7 log.go:181] (0xc00414a210) Data frame received for 1 I0715 00:26:30.886202 7 log.go:181] (0xc003259360) (1) Data frame handling I0715 00:26:30.886232 7 log.go:181] (0xc003259360) (1) Data frame sent I0715 00:26:30.886256 7 log.go:181] (0xc00414a210) (0xc003259360) Stream removed, broadcasting: 1 I0715 00:26:30.886272 7 log.go:181] (0xc00414a210) Go away received I0715 00:26:30.886430 7 log.go:181] (0xc00414a210) (0xc003259360) Stream removed, broadcasting: 1 I0715 00:26:30.886453 7 log.go:181] (0xc00414a210) (0xc003259400) Stream removed, broadcasting: 3 I0715 00:26:30.886467 7 log.go:181] (0xc00414a210) (0xc002d55040) Stream removed, broadcasting: 5 STEP: updating the annotation value Jul 15 00:26:31.392: INFO: Successfully updated pod "var-expansion-5eadfa7a-1096-4cc3-8ffc-f282c0f60391" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Jul 15 00:26:31.399: INFO: Deleting pod "var-expansion-5eadfa7a-1096-4cc3-8ffc-f282c0f60391" in namespace "var-expansion-805" Jul 15 00:26:31.402: INFO: Wait up to 5m0s for pod "var-expansion-5eadfa7a-1096-4cc3-8ffc-f282c0f60391" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:27:09.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-805" for this suite. 
• [SLOW TEST:42.901 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":294,"completed":203,"skipped":3249,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:27:09.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1410 STEP: creating an pod Jul 15 00:27:09.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 --namespace=kubectl-4853 -- logs-generator --log-lines-total 
100 --run-duration 20s' Jul 15 00:27:09.656: INFO: stderr: "" Jul 15 00:27:09.656: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Jul 15 00:27:09.656: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jul 15 00:27:09.657: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4853" to be "running and ready, or succeeded" Jul 15 00:27:09.681: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 24.491161ms Jul 15 00:27:11.685: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028367521s Jul 15 00:27:13.689: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.03229232s Jul 15 00:27:13.689: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jul 15 00:27:13.689: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Jul 15 00:27:13.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4853' Jul 15 00:27:13.812: INFO: stderr: "" Jul 15 00:27:13.812: INFO: stdout: "I0715 00:27:12.220006 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/w9gw 291\nI0715 00:27:12.420191 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/p2h 471\nI0715 00:27:12.620074 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/qbmx 326\nI0715 00:27:12.820144 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/pdc 345\nI0715 00:27:13.020183 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/78p 519\nI0715 00:27:13.220180 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/l5d 401\nI0715 00:27:13.420139 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/v4sl 444\nI0715 00:27:13.620185 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/r8n 500\n" STEP: limiting log lines Jul 15 00:27:13.812: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4853 --tail=1' Jul 15 00:27:13.933: INFO: stderr: "" Jul 15 00:27:13.933: INFO: stdout: "I0715 00:27:13.820136 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/j9vd 254\n" Jul 15 00:27:13.933: INFO: got output "I0715 00:27:13.820136 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/j9vd 254\n" STEP: limiting log bytes Jul 15 00:27:13.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4853 --limit-bytes=1' Jul 15 00:27:14.039: INFO: stderr: "" Jul 15 00:27:14.039: INFO: stdout: "I" Jul 15 00:27:14.039: INFO: got output "I" STEP: exposing timestamps Jul 15 00:27:14.039: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4853 --tail=1 --timestamps' Jul 15 00:27:14.137: INFO: stderr: "" Jul 15 00:27:14.137: INFO: stdout: "2020-07-15T00:27:14.022840004Z I0715 00:27:14.020197 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/5tl 260\n" Jul 15 00:27:14.137: INFO: got output "2020-07-15T00:27:14.022840004Z I0715 00:27:14.020197 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/5tl 260\n" STEP: restricting to a time range Jul 15 00:27:16.637: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4853 --since=1s' Jul 15 00:27:16.749: INFO: stderr: "" Jul 15 00:27:16.749: INFO: stdout: "I0715 00:27:15.820175 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/2g2t 270\nI0715 00:27:16.020196 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/5z5p 512\nI0715 00:27:16.220148 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/8l7 419\nI0715 00:27:16.420162 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/w6n 397\nI0715 00:27:16.620170 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/zxx 242\n" Jul 15 00:27:16.750: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4853 --since=24h' Jul 15 00:27:16.876: INFO: stderr: "" Jul 15 00:27:16.876: INFO: stdout: "I0715 00:27:12.220006 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/w9gw 291\nI0715 00:27:12.420191 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/p2h 471\nI0715 00:27:12.620074 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/qbmx 326\nI0715 00:27:12.820144 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/pdc 345\nI0715 
00:27:13.020183 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/78p 519\nI0715 00:27:13.220180 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/l5d 401\nI0715 00:27:13.420139 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/v4sl 444\nI0715 00:27:13.620185 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/r8n 500\nI0715 00:27:13.820136 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/j9vd 254\nI0715 00:27:14.020197 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/5tl 260\nI0715 00:27:14.220049 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/rzqv 246\nI0715 00:27:14.420140 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/9jn5 364\nI0715 00:27:14.620139 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/t6h2 350\nI0715 00:27:14.820113 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/6gv 365\nI0715 00:27:15.020135 1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/c74 413\nI0715 00:27:15.220227 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/tl58 241\nI0715 00:27:15.420177 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/gvx 348\nI0715 00:27:15.620179 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/94z 213\nI0715 00:27:15.820175 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/2g2t 270\nI0715 00:27:16.020196 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/5z5p 512\nI0715 00:27:16.220148 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/8l7 419\nI0715 00:27:16.420162 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/w6n 397\nI0715 00:27:16.620170 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/zxx 242\nI0715 00:27:16.820117 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/2p8 279\n" [AfterEach] Kubectl logs 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1416 Jul 15 00:27:16.876: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-4853' Jul 15 00:27:19.539: INFO: stderr: "" Jul 15 00:27:19.539: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:27:19.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4853" for this suite. • [SLOW TEST:10.138 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1406 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":294,"completed":204,"skipped":3258,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:27:19.610: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:27:19.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-7994" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":294,"completed":205,"skipped":3267,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:27:19.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 00:27:19.873: INFO: The status of Pod test-webserver-e88a66e3-8b64-415c-bd36-aa96351a3d46 is Pending, waiting for it to be Running (with Ready = true) Jul 15 00:27:21.878: INFO: The status of Pod test-webserver-e88a66e3-8b64-415c-bd36-aa96351a3d46 is Pending, waiting for it to be Running (with Ready = true) Jul 15 00:27:23.878: INFO: The status of Pod test-webserver-e88a66e3-8b64-415c-bd36-aa96351a3d46 is Running (Ready = false) Jul 15 00:27:25.878: INFO: The status of Pod test-webserver-e88a66e3-8b64-415c-bd36-aa96351a3d46 is Running (Ready = false) Jul 15 00:27:27.878: INFO: The status of Pod test-webserver-e88a66e3-8b64-415c-bd36-aa96351a3d46 is Running (Ready = false) Jul 15 00:27:29.878: INFO: The status of Pod test-webserver-e88a66e3-8b64-415c-bd36-aa96351a3d46 is Running (Ready = false) Jul 15 00:27:31.878: INFO: The status of Pod test-webserver-e88a66e3-8b64-415c-bd36-aa96351a3d46 is Running (Ready = false) Jul 15 00:27:33.877: INFO: The status of Pod test-webserver-e88a66e3-8b64-415c-bd36-aa96351a3d46 is Running (Ready = false) Jul 15 00:27:35.877: INFO: The status of Pod test-webserver-e88a66e3-8b64-415c-bd36-aa96351a3d46 is Running (Ready = false) Jul 15 00:27:37.877: INFO: The status of Pod test-webserver-e88a66e3-8b64-415c-bd36-aa96351a3d46 is Running (Ready = false) Jul 15 00:27:39.878: INFO: The status of Pod test-webserver-e88a66e3-8b64-415c-bd36-aa96351a3d46 is Running (Ready = false) Jul 15 00:27:41.877: INFO: The status of Pod test-webserver-e88a66e3-8b64-415c-bd36-aa96351a3d46 is Running (Ready = false) Jul 15 00:27:43.877: INFO: The status of Pod test-webserver-e88a66e3-8b64-415c-bd36-aa96351a3d46 is Running (Ready = true) Jul 15 00:27:43.880: INFO: Container started at 2020-07-15 00:27:22 +0000 UTC, pod became ready at 2020-07-15 00:27:42 +0000 UTC [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:27:43.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9085" for this suite. • [SLOW TEST:24.123 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":294,"completed":206,"skipped":3271,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:27:43.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-551 STEP: creating service affinity-nodeport in namespace services-551 STEP: creating replication controller affinity-nodeport in namespace services-551 I0715 00:27:44.095771 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-551, replica count: 3 I0715 00:27:47.146171 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0715 00:27:50.146352 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 15 00:27:50.156: INFO: Creating new exec pod Jul 15 00:27:55.205: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-551 execpod-affinityndvhl -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Jul 15 00:27:55.434: INFO: stderr: "I0715 00:27:55.336607 2063 log.go:181] (0xc00003b600) (0xc000d026e0) Create stream\nI0715 00:27:55.336663 2063 log.go:181] (0xc00003b600) (0xc000d026e0) Stream added, broadcasting: 1\nI0715 00:27:55.338628 2063 log.go:181] (0xc00003b600) Reply frame received for 1\nI0715 00:27:55.338674 2063 log.go:181] (0xc00003b600) (0xc000b1e140) Create stream\nI0715 00:27:55.338688 2063 log.go:181] (0xc00003b600) (0xc000b1e140) Stream added, broadcasting: 3\nI0715 00:27:55.339641 2063 log.go:181] (0xc00003b600) Reply frame received for 3\nI0715 00:27:55.339685 2063 log.go:181] (0xc00003b600) (0xc000b18140) Create stream\nI0715 00:27:55.339696 2063 log.go:181] (0xc00003b600) (0xc000b18140) Stream added, broadcasting: 5\nI0715 00:27:55.340633 2063 log.go:181] (0xc00003b600) Reply frame received for 5\nI0715 00:27:55.427750 2063 log.go:181] (0xc00003b600) Data frame received for 5\nI0715 
00:27:55.427782 2063 log.go:181] (0xc000b18140) (5) Data frame handling\nI0715 00:27:55.427802 2063 log.go:181] (0xc000b18140) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0715 00:27:55.428201 2063 log.go:181] (0xc00003b600) Data frame received for 5\nI0715 00:27:55.428232 2063 log.go:181] (0xc000b18140) (5) Data frame handling\nI0715 00:27:55.428248 2063 log.go:181] (0xc000b18140) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0715 00:27:55.428341 2063 log.go:181] (0xc00003b600) Data frame received for 3\nI0715 00:27:55.428363 2063 log.go:181] (0xc000b1e140) (3) Data frame handling\nI0715 00:27:55.428706 2063 log.go:181] (0xc00003b600) Data frame received for 5\nI0715 00:27:55.428811 2063 log.go:181] (0xc000b18140) (5) Data frame handling\nI0715 00:27:55.430265 2063 log.go:181] (0xc00003b600) Data frame received for 1\nI0715 00:27:55.430289 2063 log.go:181] (0xc000d026e0) (1) Data frame handling\nI0715 00:27:55.430302 2063 log.go:181] (0xc000d026e0) (1) Data frame sent\nI0715 00:27:55.430314 2063 log.go:181] (0xc00003b600) (0xc000d026e0) Stream removed, broadcasting: 1\nI0715 00:27:55.430329 2063 log.go:181] (0xc00003b600) Go away received\nI0715 00:27:55.430654 2063 log.go:181] (0xc00003b600) (0xc000d026e0) Stream removed, broadcasting: 1\nI0715 00:27:55.430671 2063 log.go:181] (0xc00003b600) (0xc000b1e140) Stream removed, broadcasting: 3\nI0715 00:27:55.430678 2063 log.go:181] (0xc00003b600) (0xc000b18140) Stream removed, broadcasting: 5\n" Jul 15 00:27:55.435: INFO: stdout: "" Jul 15 00:27:55.436: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-551 execpod-affinityndvhl -- /bin/sh -x -c nc -zv -t -w 2 10.104.209.25 80' Jul 15 00:27:55.671: INFO: stderr: "I0715 00:27:55.584363 2082 log.go:181] (0xc000d1b340) (0xc0002b9f40) Create stream\nI0715 00:27:55.584422 2082 log.go:181] (0xc000d1b340) (0xc0002b9f40) Stream added, 
broadcasting: 1\nI0715 00:27:55.588919 2082 log.go:181] (0xc000d1b340) Reply frame received for 1\nI0715 00:27:55.589004 2082 log.go:181] (0xc000d1b340) (0xc00059e640) Create stream\nI0715 00:27:55.589043 2082 log.go:181] (0xc000d1b340) (0xc00059e640) Stream added, broadcasting: 3\nI0715 00:27:55.591130 2082 log.go:181] (0xc000d1b340) Reply frame received for 3\nI0715 00:27:55.591209 2082 log.go:181] (0xc000d1b340) (0xc000730500) Create stream\nI0715 00:27:55.591253 2082 log.go:181] (0xc000d1b340) (0xc000730500) Stream added, broadcasting: 5\nI0715 00:27:55.592434 2082 log.go:181] (0xc000d1b340) Reply frame received for 5\nI0715 00:27:55.666698 2082 log.go:181] (0xc000d1b340) Data frame received for 3\nI0715 00:27:55.666729 2082 log.go:181] (0xc00059e640) (3) Data frame handling\nI0715 00:27:55.666746 2082 log.go:181] (0xc000d1b340) Data frame received for 5\nI0715 00:27:55.666753 2082 log.go:181] (0xc000730500) (5) Data frame handling\nI0715 00:27:55.666764 2082 log.go:181] (0xc000730500) (5) Data frame sent\nI0715 00:27:55.666773 2082 log.go:181] (0xc000d1b340) Data frame received for 5\nI0715 00:27:55.666780 2082 log.go:181] (0xc000730500) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.209.25 80\nConnection to 10.104.209.25 80 port [tcp/http] succeeded!\nI0715 00:27:55.667839 2082 log.go:181] (0xc000d1b340) Data frame received for 1\nI0715 00:27:55.667856 2082 log.go:181] (0xc0002b9f40) (1) Data frame handling\nI0715 00:27:55.667863 2082 log.go:181] (0xc0002b9f40) (1) Data frame sent\nI0715 00:27:55.667872 2082 log.go:181] (0xc000d1b340) (0xc0002b9f40) Stream removed, broadcasting: 1\nI0715 00:27:55.667884 2082 log.go:181] (0xc000d1b340) Go away received\nI0715 00:27:55.668185 2082 log.go:181] (0xc000d1b340) (0xc0002b9f40) Stream removed, broadcasting: 1\nI0715 00:27:55.668207 2082 log.go:181] (0xc000d1b340) (0xc00059e640) Stream removed, broadcasting: 3\nI0715 00:27:55.668218 2082 log.go:181] (0xc000d1b340) (0xc000730500) Stream removed, broadcasting: 5\n" 
Jul 15 00:27:55.672: INFO: stdout: "" Jul 15 00:27:55.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-551 execpod-affinityndvhl -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31160' Jul 15 00:27:55.869: INFO: stderr: "I0715 00:27:55.800321 2100 log.go:181] (0xc0005defd0) (0xc000a15720) Create stream\nI0715 00:27:55.800372 2100 log.go:181] (0xc0005defd0) (0xc000a15720) Stream added, broadcasting: 1\nI0715 00:27:55.805460 2100 log.go:181] (0xc0005defd0) Reply frame received for 1\nI0715 00:27:55.805505 2100 log.go:181] (0xc0005defd0) (0xc00090b0e0) Create stream\nI0715 00:27:55.805519 2100 log.go:181] (0xc0005defd0) (0xc00090b0e0) Stream added, broadcasting: 3\nI0715 00:27:55.806541 2100 log.go:181] (0xc0005defd0) Reply frame received for 3\nI0715 00:27:55.806601 2100 log.go:181] (0xc0005defd0) (0xc0008960a0) Create stream\nI0715 00:27:55.806622 2100 log.go:181] (0xc0005defd0) (0xc0008960a0) Stream added, broadcasting: 5\nI0715 00:27:55.807553 2100 log.go:181] (0xc0005defd0) Reply frame received for 5\nI0715 00:27:55.862016 2100 log.go:181] (0xc0005defd0) Data frame received for 5\nI0715 00:27:55.862048 2100 log.go:181] (0xc0008960a0) (5) Data frame handling\nI0715 00:27:55.862071 2100 log.go:181] (0xc0008960a0) (5) Data frame sent\nI0715 00:27:55.862079 2100 log.go:181] (0xc0005defd0) Data frame received for 5\nI0715 00:27:55.862086 2100 log.go:181] (0xc0008960a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 31160\nConnection to 172.18.0.14 31160 port [tcp/31160] succeeded!\nI0715 00:27:55.862107 2100 log.go:181] (0xc0008960a0) (5) Data frame sent\nI0715 00:27:55.862423 2100 log.go:181] (0xc0005defd0) Data frame received for 3\nI0715 00:27:55.862452 2100 log.go:181] (0xc00090b0e0) (3) Data frame handling\nI0715 00:27:55.862614 2100 log.go:181] (0xc0005defd0) Data frame received for 5\nI0715 00:27:55.862624 2100 log.go:181] (0xc0008960a0) (5) Data frame handling\nI0715 
00:27:55.863754 2100 log.go:181] (0xc0005defd0) Data frame received for 1\nI0715 00:27:55.863787 2100 log.go:181] (0xc000a15720) (1) Data frame handling\nI0715 00:27:55.863802 2100 log.go:181] (0xc000a15720) (1) Data frame sent\nI0715 00:27:55.863945 2100 log.go:181] (0xc0005defd0) (0xc000a15720) Stream removed, broadcasting: 1\nI0715 00:27:55.863976 2100 log.go:181] (0xc0005defd0) Go away received\nI0715 00:27:55.864429 2100 log.go:181] (0xc0005defd0) (0xc000a15720) Stream removed, broadcasting: 1\nI0715 00:27:55.864452 2100 log.go:181] (0xc0005defd0) (0xc00090b0e0) Stream removed, broadcasting: 3\nI0715 00:27:55.864465 2100 log.go:181] (0xc0005defd0) (0xc0008960a0) Stream removed, broadcasting: 5\n" Jul 15 00:27:55.869: INFO: stdout: "" Jul 15 00:27:55.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-551 execpod-affinityndvhl -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 31160' Jul 15 00:27:56.060: INFO: stderr: "I0715 00:27:55.995307 2119 log.go:181] (0xc000130f20) (0xc000a445a0) Create stream\nI0715 00:27:55.995359 2119 log.go:181] (0xc000130f20) (0xc000a445a0) Stream added, broadcasting: 1\nI0715 00:27:55.999961 2119 log.go:181] (0xc000130f20) Reply frame received for 1\nI0715 00:27:55.999991 2119 log.go:181] (0xc000130f20) (0xc000b9f0e0) Create stream\nI0715 00:27:56.000015 2119 log.go:181] (0xc000130f20) (0xc000b9f0e0) Stream added, broadcasting: 3\nI0715 00:27:56.001058 2119 log.go:181] (0xc000130f20) Reply frame received for 3\nI0715 00:27:56.001084 2119 log.go:181] (0xc000130f20) (0xc000b643c0) Create stream\nI0715 00:27:56.001092 2119 log.go:181] (0xc000130f20) (0xc000b643c0) Stream added, broadcasting: 5\nI0715 00:27:56.002020 2119 log.go:181] (0xc000130f20) Reply frame received for 5\nI0715 00:27:56.052622 2119 log.go:181] (0xc000130f20) Data frame received for 5\nI0715 00:27:56.052651 2119 log.go:181] (0xc000b643c0) (5) Data frame handling\nI0715 00:27:56.052670 2119 
log.go:181] (0xc000b643c0) (5) Data frame sent\nI0715 00:27:56.052679 2119 log.go:181] (0xc000130f20) Data frame received for 5\nI0715 00:27:56.052686 2119 log.go:181] (0xc000b643c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 31160\nConnection to 172.18.0.11 31160 port [tcp/31160] succeeded!\nI0715 00:27:56.052707 2119 log.go:181] (0xc000b643c0) (5) Data frame sent\nI0715 00:27:56.053084 2119 log.go:181] (0xc000130f20) Data frame received for 3\nI0715 00:27:56.053118 2119 log.go:181] (0xc000b9f0e0) (3) Data frame handling\nI0715 00:27:56.053246 2119 log.go:181] (0xc000130f20) Data frame received for 5\nI0715 00:27:56.053271 2119 log.go:181] (0xc000b643c0) (5) Data frame handling\nI0715 00:27:56.054859 2119 log.go:181] (0xc000130f20) Data frame received for 1\nI0715 00:27:56.054888 2119 log.go:181] (0xc000a445a0) (1) Data frame handling\nI0715 00:27:56.054920 2119 log.go:181] (0xc000a445a0) (1) Data frame sent\nI0715 00:27:56.054943 2119 log.go:181] (0xc000130f20) (0xc000a445a0) Stream removed, broadcasting: 1\nI0715 00:27:56.054972 2119 log.go:181] (0xc000130f20) Go away received\nI0715 00:27:56.055631 2119 log.go:181] (0xc000130f20) (0xc000a445a0) Stream removed, broadcasting: 1\nI0715 00:27:56.055656 2119 log.go:181] (0xc000130f20) (0xc000b9f0e0) Stream removed, broadcasting: 3\nI0715 00:27:56.055669 2119 log.go:181] (0xc000130f20) (0xc000b643c0) Stream removed, broadcasting: 5\n" Jul 15 00:27:56.060: INFO: stdout: "" Jul 15 00:27:56.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-551 execpod-affinityndvhl -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:31160/ ; done' Jul 15 00:27:56.381: INFO: stderr: "I0715 00:27:56.193912 2137 log.go:181] (0xc0006cefd0) (0xc000d11360) Create stream\nI0715 00:27:56.193961 2137 log.go:181] (0xc0006cefd0) (0xc000d11360) Stream added, broadcasting: 1\nI0715 00:27:56.196571 2137 
log.go:181] (0xc0006cefd0) Reply frame received for 1\nI0715 00:27:56.196604 2137 log.go:181] (0xc0006cefd0) (0xc000f1a460) Create stream\nI0715 00:27:56.196621 2137 log.go:181] (0xc0006cefd0) (0xc000f1a460) Stream added, broadcasting: 3\nI0715 00:27:56.197918 2137 log.go:181] (0xc0006cefd0) Reply frame received for 3\nI0715 00:27:56.197969 2137 log.go:181] (0xc0006cefd0) (0xc0009200a0) Create stream\nI0715 00:27:56.198002 2137 log.go:181] (0xc0006cefd0) (0xc0009200a0) Stream added, broadcasting: 5\nI0715 00:27:56.198953 2137 log.go:181] (0xc0006cefd0) Reply frame received for 5\nI0715 00:27:56.272315 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.272366 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.272384 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.272411 2137 log.go:181] (0xc0006cefd0) Data frame received for 5\nI0715 00:27:56.272427 2137 log.go:181] (0xc0009200a0) (5) Data frame handling\nI0715 00:27:56.272458 2137 log.go:181] (0xc0009200a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31160/\nI0715 00:27:56.280102 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.280126 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.280146 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.280868 2137 log.go:181] (0xc0006cefd0) Data frame received for 5\nI0715 00:27:56.280892 2137 log.go:181] (0xc0009200a0) (5) Data frame handling\nI0715 00:27:56.280904 2137 log.go:181] (0xc0009200a0) (5) Data frame sent\nI0715 00:27:56.280920 2137 log.go:181] (0xc0006cefd0) Data frame received for 5\nI0715 00:27:56.280942 2137 log.go:181] (0xc0009200a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31160/\nI0715 00:27:56.280989 2137 log.go:181] (0xc0009200a0) (5) Data frame sent\nI0715 00:27:56.281011 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 
00:27:56.281033 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.281058 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.287703 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.287727 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.287758 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.288231 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.288262 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.288279 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.288303 2137 log.go:181] (0xc0006cefd0) Data frame received for 5\nI0715 00:27:56.288316 2137 log.go:181] (0xc0009200a0) (5) Data frame handling\nI0715 00:27:56.288341 2137 log.go:181] (0xc0009200a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31160/\nI0715 00:27:56.291901 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.291927 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.291953 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.292358 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.292388 2137 log.go:181] (0xc0006cefd0) Data frame received for 5\nI0715 00:27:56.292419 2137 log.go:181] (0xc0009200a0) (5) Data frame handling\nI0715 00:27:56.292438 2137 log.go:181] (0xc0009200a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31160/\nI0715 00:27:56.292458 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.292470 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.298516 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.298538 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.298557 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.299017 2137 log.go:181] (0xc0006cefd0) Data frame 
received for 3\nI0715 00:27:56.299038 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.299061 2137 log.go:181] (0xc0006cefd0) Data frame received for 5\nI0715 00:27:56.299086 2137 log.go:181] (0xc0009200a0) (5) Data frame handling\nI0715 00:27:56.299102 2137 log.go:181] (0xc0009200a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31160/\nI0715 00:27:56.299122 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.303683 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.303722 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.303762 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.304235 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.304258 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.304287 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.304318 2137 log.go:181] (0xc0006cefd0) Data frame received for 5\nI0715 00:27:56.304330 2137 log.go:181] (0xc0009200a0) (5) Data frame handling\nI0715 00:27:56.304374 2137 log.go:181] (0xc0009200a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31160/\nI0715 00:27:56.310237 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.310264 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.310285 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.310764 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.310802 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.310818 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.310838 2137 log.go:181] (0xc0006cefd0) Data frame received for 5\nI0715 00:27:56.310849 2137 log.go:181] (0xc0009200a0) (5) Data frame handling\nI0715 00:27:56.310867 2137 log.go:181] (0xc0009200a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.14:31160/\nI0715 00:27:56.318337 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.318362 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.318381 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.319107 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.319124 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.319149 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.319160 2137 log.go:181] (0xc0006cefd0) Data frame received for 5\nI0715 00:27:56.319167 2137 log.go:181] (0xc0009200a0) (5) Data frame handling\nI0715 00:27:56.319175 2137 log.go:181] (0xc0009200a0) (5) Data frame sent\nI0715 00:27:56.319182 2137 log.go:181] (0xc0006cefd0) Data frame received for 5\nI0715 00:27:56.319188 2137 log.go:181] (0xc0009200a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31160/\nI0715 00:27:56.319203 2137 log.go:181] (0xc0009200a0) (5) Data frame sent\nI0715 00:27:56.324624 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.324652 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.324671 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.325430 2137 log.go:181] (0xc0006cefd0) Data frame received for 5\nI0715 00:27:56.325455 2137 log.go:181] (0xc0009200a0) (5) Data frame handling\nI0715 00:27:56.325467 2137 log.go:181] (0xc0009200a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31160/\nI0715 00:27:56.325486 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.325497 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.325508 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.329635 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.329652 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.329660 2137 
log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.330627 2137 log.go:181] (0xc0006cefd0) Data frame received for 5\nI0715 00:27:56.330643 2137 log.go:181] (0xc0009200a0) (5) Data frame handling\nI0715 00:27:56.330660 2137 log.go:181] (0xc0009200a0) (5) Data frame sent\nI0715 00:27:56.330668 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.330675 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31160/\nI0715 00:27:56.330682 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.336879 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.336915 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.336943 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.337715 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.337752 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.337771 2137 log.go:181] (0xc0006cefd0) Data frame received for 5\nI0715 00:27:56.337796 2137 log.go:181] (0xc0009200a0) (5) Data frame handling\nI0715 00:27:56.337810 2137 log.go:181] (0xc0009200a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31160/\nI0715 00:27:56.337831 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.342906 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.342931 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.342953 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.343673 2137 log.go:181] (0xc0006cefd0) Data frame received for 5\nI0715 00:27:56.343700 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.343749 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.343766 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.343789 2137 log.go:181] (0xc0009200a0) (5) Data frame handling\nI0715 
00:27:56.343813 2137 log.go:181] (0xc0009200a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31160/\nI0715 00:27:56.350653 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.350672 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.350686 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.351245 2137 log.go:181] (0xc0006cefd0) Data frame received for 5\nI0715 00:27:56.351259 2137 log.go:181] (0xc0009200a0) (5) Data frame handling\nI0715 00:27:56.351268 2137 log.go:181] (0xc0009200a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31160/\nI0715 00:27:56.351370 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.351409 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.351438 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.356246 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.356264 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.356273 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.356699 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.356807 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.356836 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.356941 2137 log.go:181] (0xc0006cefd0) Data frame received for 5\nI0715 00:27:56.356970 2137 log.go:181] (0xc0009200a0) (5) Data frame handling\nI0715 00:27:56.356999 2137 log.go:181] (0xc0009200a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31160/\nI0715 00:27:56.360062 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.360089 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.360111 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.360826 2137 log.go:181] (0xc0006cefd0) Data frame received for 
3\nI0715 00:27:56.360860 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.360878 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.360902 2137 log.go:181] (0xc0006cefd0) Data frame received for 5\nI0715 00:27:56.360917 2137 log.go:181] (0xc0009200a0) (5) Data frame handling\nI0715 00:27:56.360936 2137 log.go:181] (0xc0009200a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31160/\nI0715 00:27:56.365931 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.365958 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.365981 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.366463 2137 log.go:181] (0xc0006cefd0) Data frame received for 5\nI0715 00:27:56.366480 2137 log.go:181] (0xc0009200a0) (5) Data frame handling\nI0715 00:27:56.366497 2137 log.go:181] (0xc0009200a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31160/\nI0715 00:27:56.366712 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.366735 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.366749 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.373335 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.373362 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.373388 2137 log.go:181] (0xc000f1a460) (3) Data frame sent\nI0715 00:27:56.374112 2137 log.go:181] (0xc0006cefd0) Data frame received for 3\nI0715 00:27:56.374138 2137 log.go:181] (0xc000f1a460) (3) Data frame handling\nI0715 00:27:56.374233 2137 log.go:181] (0xc0006cefd0) Data frame received for 5\nI0715 00:27:56.374255 2137 log.go:181] (0xc0009200a0) (5) Data frame handling\nI0715 00:27:56.375919 2137 log.go:181] (0xc0006cefd0) Data frame received for 1\nI0715 00:27:56.375972 2137 log.go:181] (0xc000d11360) (1) Data frame handling\nI0715 00:27:56.376011 2137 log.go:181] 
(0xc000d11360) (1) Data frame sent\nI0715 00:27:56.376033 2137 log.go:181] (0xc0006cefd0) (0xc000d11360) Stream removed, broadcasting: 1\nI0715 00:27:56.376099 2137 log.go:181] (0xc0006cefd0) Go away received\nI0715 00:27:56.376443 2137 log.go:181] (0xc0006cefd0) (0xc000d11360) Stream removed, broadcasting: 1\nI0715 00:27:56.376457 2137 log.go:181] (0xc0006cefd0) (0xc000f1a460) Stream removed, broadcasting: 3\nI0715 00:27:56.376464 2137 log.go:181] (0xc0006cefd0) (0xc0009200a0) Stream removed, broadcasting: 5\n" Jul 15 00:27:56.382: INFO: stdout: "\naffinity-nodeport-brf7w\naffinity-nodeport-brf7w\naffinity-nodeport-brf7w\naffinity-nodeport-brf7w\naffinity-nodeport-brf7w\naffinity-nodeport-brf7w\naffinity-nodeport-brf7w\naffinity-nodeport-brf7w\naffinity-nodeport-brf7w\naffinity-nodeport-brf7w\naffinity-nodeport-brf7w\naffinity-nodeport-brf7w\naffinity-nodeport-brf7w\naffinity-nodeport-brf7w\naffinity-nodeport-brf7w\naffinity-nodeport-brf7w" Jul 15 00:27:56.382: INFO: Received response from host: affinity-nodeport-brf7w Jul 15 00:27:56.382: INFO: Received response from host: affinity-nodeport-brf7w Jul 15 00:27:56.382: INFO: Received response from host: affinity-nodeport-brf7w Jul 15 00:27:56.382: INFO: Received response from host: affinity-nodeport-brf7w Jul 15 00:27:56.382: INFO: Received response from host: affinity-nodeport-brf7w Jul 15 00:27:56.382: INFO: Received response from host: affinity-nodeport-brf7w Jul 15 00:27:56.382: INFO: Received response from host: affinity-nodeport-brf7w Jul 15 00:27:56.382: INFO: Received response from host: affinity-nodeport-brf7w Jul 15 00:27:56.382: INFO: Received response from host: affinity-nodeport-brf7w Jul 15 00:27:56.382: INFO: Received response from host: affinity-nodeport-brf7w Jul 15 00:27:56.382: INFO: Received response from host: affinity-nodeport-brf7w Jul 15 00:27:56.382: INFO: Received response from host: affinity-nodeport-brf7w Jul 15 00:27:56.382: INFO: Received response from host: affinity-nodeport-brf7w Jul 
15 00:27:56.382: INFO: Received response from host: affinity-nodeport-brf7w Jul 15 00:27:56.382: INFO: Received response from host: affinity-nodeport-brf7w Jul 15 00:27:56.382: INFO: Received response from host: affinity-nodeport-brf7w Jul 15 00:27:56.382: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-551, will wait for the garbage collector to delete the pods Jul 15 00:27:56.499: INFO: Deleting ReplicationController affinity-nodeport took: 4.839571ms Jul 15 00:27:56.900: INFO: Terminating ReplicationController affinity-nodeport pods took: 400.193595ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:28:09.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-551" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:25.409 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":294,"completed":207,"skipped":3276,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:28:09.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Jul 15 00:28:09.412: INFO: Waiting up to 1m0s for all nodes to be ready Jul 15 00:29:09.431: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Jul 15 00:29:09.448: INFO: Created pod: pod0-sched-preemption-low-priority Jul 15 00:29:09.534: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:29:21.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-9192" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:72.473 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":294,"completed":208,"skipped":3278,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:29:21.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod 
to test downward API volume plugin Jul 15 00:29:22.160: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b02c8416-4e1a-4f24-8592-6ce030715739" in namespace "projected-5306" to be "Succeeded or Failed" Jul 15 00:29:22.175: INFO: Pod "downwardapi-volume-b02c8416-4e1a-4f24-8592-6ce030715739": Phase="Pending", Reason="", readiness=false. Elapsed: 14.71848ms Jul 15 00:29:24.179: INFO: Pod "downwardapi-volume-b02c8416-4e1a-4f24-8592-6ce030715739": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019043351s Jul 15 00:29:26.184: INFO: Pod "downwardapi-volume-b02c8416-4e1a-4f24-8592-6ce030715739": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023455359s STEP: Saw pod success Jul 15 00:29:26.184: INFO: Pod "downwardapi-volume-b02c8416-4e1a-4f24-8592-6ce030715739" satisfied condition "Succeeded or Failed" Jul 15 00:29:26.187: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-b02c8416-4e1a-4f24-8592-6ce030715739 container client-container: STEP: delete the pod Jul 15 00:29:26.272: INFO: Waiting for pod downwardapi-volume-b02c8416-4e1a-4f24-8592-6ce030715739 to disappear Jul 15 00:29:26.282: INFO: Pod downwardapi-volume-b02c8416-4e1a-4f24-8592-6ce030715739 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:29:26.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5306" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":209,"skipped":3289,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:29:26.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jul 15 00:29:26.355: INFO: Waiting up to 5m0s for pod "downward-api-544dad33-4089-4490-9dad-8d35a013dce1" in namespace "downward-api-3790" to be "Succeeded or Failed" Jul 15 00:29:26.444: INFO: Pod "downward-api-544dad33-4089-4490-9dad-8d35a013dce1": Phase="Pending", Reason="", readiness=false. Elapsed: 89.144549ms Jul 15 00:29:28.448: INFO: Pod "downward-api-544dad33-4089-4490-9dad-8d35a013dce1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092905034s Jul 15 00:29:30.452: INFO: Pod "downward-api-544dad33-4089-4490-9dad-8d35a013dce1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.097134981s Jul 15 00:29:32.457: INFO: Pod "downward-api-544dad33-4089-4490-9dad-8d35a013dce1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.101910279s STEP: Saw pod success Jul 15 00:29:32.457: INFO: Pod "downward-api-544dad33-4089-4490-9dad-8d35a013dce1" satisfied condition "Succeeded or Failed" Jul 15 00:29:32.460: INFO: Trying to get logs from node latest-worker pod downward-api-544dad33-4089-4490-9dad-8d35a013dce1 container dapi-container: STEP: delete the pod Jul 15 00:29:32.495: INFO: Waiting for pod downward-api-544dad33-4089-4490-9dad-8d35a013dce1 to disappear Jul 15 00:29:32.513: INFO: Pod downward-api-544dad33-4089-4490-9dad-8d35a013dce1 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:29:32.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3790" for this suite. • [SLOW TEST:6.232 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":294,"completed":210,"skipped":3296,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling 
should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:29:32.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7881 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-7881 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7881 Jul 15 00:29:32.603: INFO: Found 0 stateful pods, waiting for 1 Jul 15 00:29:42.607: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jul 15 00:29:42.610: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 15 00:29:42.872: INFO: stderr: "I0715 00:29:42.742432 2155 log.go:181] (0xc000e91080) (0xc0009a3540) Create stream\nI0715 00:29:42.742482 2155 log.go:181] (0xc000e91080) (0xc0009a3540) Stream added, broadcasting: 1\nI0715 00:29:42.750090 2155 log.go:181] 
(0xc000e91080) Reply frame received for 1\nI0715 00:29:42.750142 2155 log.go:181] (0xc000e91080) (0xc000890640) Create stream\nI0715 00:29:42.750156 2155 log.go:181] (0xc000e91080) (0xc000890640) Stream added, broadcasting: 3\nI0715 00:29:42.751151 2155 log.go:181] (0xc000e91080) Reply frame received for 3\nI0715 00:29:42.751181 2155 log.go:181] (0xc000e91080) (0xc0007168c0) Create stream\nI0715 00:29:42.751191 2155 log.go:181] (0xc000e91080) (0xc0007168c0) Stream added, broadcasting: 5\nI0715 00:29:42.752059 2155 log.go:181] (0xc000e91080) Reply frame received for 5\nI0715 00:29:42.817116 2155 log.go:181] (0xc000e91080) Data frame received for 5\nI0715 00:29:42.817241 2155 log.go:181] (0xc0007168c0) (5) Data frame handling\nI0715 00:29:42.817328 2155 log.go:181] (0xc0007168c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0715 00:29:42.864011 2155 log.go:181] (0xc000e91080) Data frame received for 3\nI0715 00:29:42.864043 2155 log.go:181] (0xc000890640) (3) Data frame handling\nI0715 00:29:42.864084 2155 log.go:181] (0xc000890640) (3) Data frame sent\nI0715 00:29:42.864091 2155 log.go:181] (0xc000e91080) Data frame received for 3\nI0715 00:29:42.864096 2155 log.go:181] (0xc000890640) (3) Data frame handling\nI0715 00:29:42.864223 2155 log.go:181] (0xc000e91080) Data frame received for 5\nI0715 00:29:42.864261 2155 log.go:181] (0xc0007168c0) (5) Data frame handling\nI0715 00:29:42.866622 2155 log.go:181] (0xc000e91080) Data frame received for 1\nI0715 00:29:42.866656 2155 log.go:181] (0xc0009a3540) (1) Data frame handling\nI0715 00:29:42.866687 2155 log.go:181] (0xc0009a3540) (1) Data frame sent\nI0715 00:29:42.866714 2155 log.go:181] (0xc000e91080) (0xc0009a3540) Stream removed, broadcasting: 1\nI0715 00:29:42.866744 2155 log.go:181] (0xc000e91080) Go away received\nI0715 00:29:42.866985 2155 log.go:181] (0xc000e91080) (0xc0009a3540) Stream removed, broadcasting: 1\nI0715 00:29:42.866996 2155 log.go:181] (0xc000e91080) (0xc000890640) 
Stream removed, broadcasting: 3\nI0715 00:29:42.867002 2155 log.go:181] (0xc000e91080) (0xc0007168c0) Stream removed, broadcasting: 5\n" Jul 15 00:29:42.872: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 15 00:29:42.872: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 15 00:29:42.876: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jul 15 00:29:52.880: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 15 00:29:52.880: INFO: Waiting for statefulset status.replicas updated to 0 Jul 15 00:29:52.917: INFO: POD NODE PHASE GRACE CONDITIONS Jul 15 00:29:52.917: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC }] Jul 15 00:29:52.917: INFO: Jul 15 00:29:52.917: INFO: StatefulSet ss has not reached scale 3, at 1 Jul 15 00:29:53.922: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.974466021s Jul 15 00:29:54.927: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.969548759s Jul 15 00:29:55.944: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.964487303s Jul 15 00:29:56.949: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.947474097s Jul 15 00:29:57.958: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.942367547s Jul 15 00:29:58.964: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.933361252s Jul 15 00:29:59.968: INFO: Verifying statefulset 
ss doesn't scale past 3 for another 2.927820874s Jul 15 00:30:00.973: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.922973006s Jul 15 00:30:01.979: INFO: Verifying statefulset ss doesn't scale past 3 for another 917.874557ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7881 Jul 15 00:30:02.993: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:30:03.191: INFO: stderr: "I0715 00:30:03.124561 2174 log.go:181] (0xc0007bb290) (0xc000f00500) Create stream\nI0715 00:30:03.124608 2174 log.go:181] (0xc0007bb290) (0xc000f00500) Stream added, broadcasting: 1\nI0715 00:30:03.131684 2174 log.go:181] (0xc0007bb290) Reply frame received for 1\nI0715 00:30:03.131737 2174 log.go:181] (0xc0007bb290) (0xc000822be0) Create stream\nI0715 00:30:03.131752 2174 log.go:181] (0xc0007bb290) (0xc000822be0) Stream added, broadcasting: 3\nI0715 00:30:03.132872 2174 log.go:181] (0xc0007bb290) Reply frame received for 3\nI0715 00:30:03.132901 2174 log.go:181] (0xc0007bb290) (0xc00059e000) Create stream\nI0715 00:30:03.132910 2174 log.go:181] (0xc0007bb290) (0xc00059e000) Stream added, broadcasting: 5\nI0715 00:30:03.133900 2174 log.go:181] (0xc0007bb290) Reply frame received for 5\nI0715 00:30:03.182980 2174 log.go:181] (0xc0007bb290) Data frame received for 5\nI0715 00:30:03.183022 2174 log.go:181] (0xc00059e000) (5) Data frame handling\nI0715 00:30:03.183039 2174 log.go:181] (0xc00059e000) (5) Data frame sent\nI0715 00:30:03.183051 2174 log.go:181] (0xc0007bb290) Data frame received for 5\nI0715 00:30:03.183063 2174 log.go:181] (0xc00059e000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0715 00:30:03.183099 2174 log.go:181] (0xc0007bb290) Data frame received for 3\nI0715 00:30:03.183135 2174 
log.go:181] (0xc000822be0) (3) Data frame handling\nI0715 00:30:03.183171 2174 log.go:181] (0xc000822be0) (3) Data frame sent\nI0715 00:30:03.183200 2174 log.go:181] (0xc0007bb290) Data frame received for 3\nI0715 00:30:03.183216 2174 log.go:181] (0xc000822be0) (3) Data frame handling\nI0715 00:30:03.185097 2174 log.go:181] (0xc0007bb290) Data frame received for 1\nI0715 00:30:03.185127 2174 log.go:181] (0xc000f00500) (1) Data frame handling\nI0715 00:30:03.185157 2174 log.go:181] (0xc000f00500) (1) Data frame sent\nI0715 00:30:03.185178 2174 log.go:181] (0xc0007bb290) (0xc000f00500) Stream removed, broadcasting: 1\nI0715 00:30:03.185237 2174 log.go:181] (0xc0007bb290) Go away received\nI0715 00:30:03.185707 2174 log.go:181] (0xc0007bb290) (0xc000f00500) Stream removed, broadcasting: 1\nI0715 00:30:03.185739 2174 log.go:181] (0xc0007bb290) (0xc000822be0) Stream removed, broadcasting: 3\nI0715 00:30:03.185757 2174 log.go:181] (0xc0007bb290) (0xc00059e000) Stream removed, broadcasting: 5\n" Jul 15 00:30:03.191: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 15 00:30:03.191: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 15 00:30:03.191: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:30:03.435: INFO: stderr: "I0715 00:30:03.353926 2192 log.go:181] (0xc0005d6e70) (0xc000719e00) Create stream\nI0715 00:30:03.353996 2192 log.go:181] (0xc0005d6e70) (0xc000719e00) Stream added, broadcasting: 1\nI0715 00:30:03.358782 2192 log.go:181] (0xc0005d6e70) Reply frame received for 1\nI0715 00:30:03.358822 2192 log.go:181] (0xc0005d6e70) (0xc0006b8460) Create stream\nI0715 00:30:03.358833 2192 log.go:181] (0xc0005d6e70) (0xc0006b8460) Stream added, broadcasting: 3\nI0715 
00:30:03.359915 2192 log.go:181] (0xc0005d6e70) Reply frame received for 3\nI0715 00:30:03.359947 2192 log.go:181] (0xc0005d6e70) (0xc00069e0a0) Create stream\nI0715 00:30:03.359958 2192 log.go:181] (0xc0005d6e70) (0xc00069e0a0) Stream added, broadcasting: 5\nI0715 00:30:03.361091 2192 log.go:181] (0xc0005d6e70) Reply frame received for 5\nI0715 00:30:03.416332 2192 log.go:181] (0xc0005d6e70) Data frame received for 5\nI0715 00:30:03.416362 2192 log.go:181] (0xc00069e0a0) (5) Data frame handling\nI0715 00:30:03.416381 2192 log.go:181] (0xc00069e0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0715 00:30:03.427854 2192 log.go:181] (0xc0005d6e70) Data frame received for 3\nI0715 00:30:03.427891 2192 log.go:181] (0xc0006b8460) (3) Data frame handling\nI0715 00:30:03.427910 2192 log.go:181] (0xc0006b8460) (3) Data frame sent\nI0715 00:30:03.427925 2192 log.go:181] (0xc0005d6e70) Data frame received for 3\nI0715 00:30:03.427938 2192 log.go:181] (0xc0006b8460) (3) Data frame handling\nI0715 00:30:03.427972 2192 log.go:181] (0xc0005d6e70) Data frame received for 5\nI0715 00:30:03.427988 2192 log.go:181] (0xc00069e0a0) (5) Data frame handling\nI0715 00:30:03.428006 2192 log.go:181] (0xc00069e0a0) (5) Data frame sent\nI0715 00:30:03.428029 2192 log.go:181] (0xc0005d6e70) Data frame received for 5\nI0715 00:30:03.428041 2192 log.go:181] (0xc00069e0a0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0715 00:30:03.428065 2192 log.go:181] (0xc00069e0a0) (5) Data frame sent\nI0715 00:30:03.428085 2192 log.go:181] (0xc0005d6e70) Data frame received for 5\nI0715 00:30:03.428098 2192 log.go:181] (0xc00069e0a0) (5) Data frame handling\nI0715 00:30:03.429997 2192 log.go:181] (0xc0005d6e70) Data frame received for 1\nI0715 00:30:03.430013 2192 log.go:181] (0xc000719e00) (1) Data frame handling\nI0715 00:30:03.430022 2192 log.go:181] (0xc000719e00) (1) Data frame sent\nI0715 00:30:03.430116 2192 log.go:181] 
(0xc0005d6e70) (0xc000719e00) Stream removed, broadcasting: 1\nI0715 00:30:03.430142 2192 log.go:181] (0xc0005d6e70) Go away received\nI0715 00:30:03.430573 2192 log.go:181] (0xc0005d6e70) (0xc000719e00) Stream removed, broadcasting: 1\nI0715 00:30:03.430600 2192 log.go:181] (0xc0005d6e70) (0xc0006b8460) Stream removed, broadcasting: 3\nI0715 00:30:03.430621 2192 log.go:181] (0xc0005d6e70) (0xc00069e0a0) Stream removed, broadcasting: 5\n" Jul 15 00:30:03.435: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 15 00:30:03.435: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 15 00:30:03.435: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:30:03.650: INFO: stderr: "I0715 00:30:03.559951 2210 log.go:181] (0xc000fa0fd0) (0xc000f18780) Create stream\nI0715 00:30:03.559989 2210 log.go:181] (0xc000fa0fd0) (0xc000f18780) Stream added, broadcasting: 1\nI0715 00:30:03.564222 2210 log.go:181] (0xc000fa0fd0) Reply frame received for 1\nI0715 00:30:03.564250 2210 log.go:181] (0xc000fa0fd0) (0xc000ad5220) Create stream\nI0715 00:30:03.564258 2210 log.go:181] (0xc000fa0fd0) (0xc000ad5220) Stream added, broadcasting: 3\nI0715 00:30:03.565175 2210 log.go:181] (0xc000fa0fd0) Reply frame received for 3\nI0715 00:30:03.565203 2210 log.go:181] (0xc000fa0fd0) (0xc0004de280) Create stream\nI0715 00:30:03.565211 2210 log.go:181] (0xc000fa0fd0) (0xc0004de280) Stream added, broadcasting: 5\nI0715 00:30:03.566013 2210 log.go:181] (0xc000fa0fd0) Reply frame received for 5\nI0715 00:30:03.643736 2210 log.go:181] (0xc000fa0fd0) Data frame received for 3\nI0715 00:30:03.643778 2210 log.go:181] (0xc000ad5220) (3) Data frame handling\nI0715 00:30:03.643793 2210 log.go:181] (0xc000ad5220) (3) 
Data frame sent\nI0715 00:30:03.643803 2210 log.go:181] (0xc000fa0fd0) Data frame received for 3\nI0715 00:30:03.643823 2210 log.go:181] (0xc000fa0fd0) Data frame received for 5\nI0715 00:30:03.643849 2210 log.go:181] (0xc0004de280) (5) Data frame handling\nI0715 00:30:03.643870 2210 log.go:181] (0xc0004de280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0715 00:30:03.643913 2210 log.go:181] (0xc000ad5220) (3) Data frame handling\nI0715 00:30:03.643959 2210 log.go:181] (0xc000fa0fd0) Data frame received for 5\nI0715 00:30:03.643983 2210 log.go:181] (0xc0004de280) (5) Data frame handling\nI0715 00:30:03.645830 2210 log.go:181] (0xc000fa0fd0) Data frame received for 1\nI0715 00:30:03.645864 2210 log.go:181] (0xc000f18780) (1) Data frame handling\nI0715 00:30:03.645901 2210 log.go:181] (0xc000f18780) (1) Data frame sent\nI0715 00:30:03.645937 2210 log.go:181] (0xc000fa0fd0) (0xc000f18780) Stream removed, broadcasting: 1\nI0715 00:30:03.645965 2210 log.go:181] (0xc000fa0fd0) Go away received\nI0715 00:30:03.646369 2210 log.go:181] (0xc000fa0fd0) (0xc000f18780) Stream removed, broadcasting: 1\nI0715 00:30:03.646392 2210 log.go:181] (0xc000fa0fd0) (0xc000ad5220) Stream removed, broadcasting: 3\nI0715 00:30:03.646403 2210 log.go:181] (0xc000fa0fd0) (0xc0004de280) Stream removed, broadcasting: 5\n" Jul 15 00:30:03.650: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 15 00:30:03.650: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 15 00:30:03.653: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 15 00:30:03.653: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 15 00:30:03.653: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - 
Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jul 15 00:30:03.657: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 15 00:30:03.870: INFO: stderr: "I0715 00:30:03.801735 2228 log.go:181] (0xc000664fd0) (0xc000d9a460) Create stream\nI0715 00:30:03.801797 2228 log.go:181] (0xc000664fd0) (0xc000d9a460) Stream added, broadcasting: 1\nI0715 00:30:03.806276 2228 log.go:181] (0xc000664fd0) Reply frame received for 1\nI0715 00:30:03.806325 2228 log.go:181] (0xc000664fd0) (0xc0009eb220) Create stream\nI0715 00:30:03.806335 2228 log.go:181] (0xc000664fd0) (0xc0009eb220) Stream added, broadcasting: 3\nI0715 00:30:03.807420 2228 log.go:181] (0xc000664fd0) Reply frame received for 3\nI0715 00:30:03.807455 2228 log.go:181] (0xc000664fd0) (0xc0008f8320) Create stream\nI0715 00:30:03.807466 2228 log.go:181] (0xc000664fd0) (0xc0008f8320) Stream added, broadcasting: 5\nI0715 00:30:03.808246 2228 log.go:181] (0xc000664fd0) Reply frame received for 5\nI0715 00:30:03.862745 2228 log.go:181] (0xc000664fd0) Data frame received for 3\nI0715 00:30:03.862779 2228 log.go:181] (0xc0009eb220) (3) Data frame handling\nI0715 00:30:03.862792 2228 log.go:181] (0xc0009eb220) (3) Data frame sent\nI0715 00:30:03.862802 2228 log.go:181] (0xc000664fd0) Data frame received for 3\nI0715 00:30:03.862811 2228 log.go:181] (0xc0009eb220) (3) Data frame handling\nI0715 00:30:03.862824 2228 log.go:181] (0xc000664fd0) Data frame received for 5\nI0715 00:30:03.862833 2228 log.go:181] (0xc0008f8320) (5) Data frame handling\nI0715 00:30:03.862842 2228 log.go:181] (0xc0008f8320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0715 00:30:03.862852 2228 log.go:181] (0xc000664fd0) Data frame received for 5\nI0715 00:30:03.862881 2228 log.go:181] (0xc0008f8320) (5) Data frame handling\nI0715 
00:30:03.864625 2228 log.go:181] (0xc000664fd0) Data frame received for 1\nI0715 00:30:03.864652 2228 log.go:181] (0xc000d9a460) (1) Data frame handling\nI0715 00:30:03.864672 2228 log.go:181] (0xc000d9a460) (1) Data frame sent\nI0715 00:30:03.864686 2228 log.go:181] (0xc000664fd0) (0xc000d9a460) Stream removed, broadcasting: 1\nI0715 00:30:03.864713 2228 log.go:181] (0xc000664fd0) Go away received\nI0715 00:30:03.865218 2228 log.go:181] (0xc000664fd0) (0xc000d9a460) Stream removed, broadcasting: 1\nI0715 00:30:03.865243 2228 log.go:181] (0xc000664fd0) (0xc0009eb220) Stream removed, broadcasting: 3\nI0715 00:30:03.865256 2228 log.go:181] (0xc000664fd0) (0xc0008f8320) Stream removed, broadcasting: 5\n" Jul 15 00:30:03.870: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 15 00:30:03.870: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 15 00:30:03.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 15 00:30:04.123: INFO: stderr: "I0715 00:30:04.009363 2246 log.go:181] (0xc00093f340) (0xc000c019a0) Create stream\nI0715 00:30:04.009445 2246 log.go:181] (0xc00093f340) (0xc000c019a0) Stream added, broadcasting: 1\nI0715 00:30:04.015304 2246 log.go:181] (0xc00093f340) Reply frame received for 1\nI0715 00:30:04.015340 2246 log.go:181] (0xc00093f340) (0xc0005aabe0) Create stream\nI0715 00:30:04.015349 2246 log.go:181] (0xc00093f340) (0xc0005aabe0) Stream added, broadcasting: 3\nI0715 00:30:04.016369 2246 log.go:181] (0xc00093f340) Reply frame received for 3\nI0715 00:30:04.016424 2246 log.go:181] (0xc00093f340) (0xc0005abea0) Create stream\nI0715 00:30:04.016452 2246 log.go:181] (0xc00093f340) (0xc0005abea0) Stream added, broadcasting: 5\nI0715 00:30:04.017884 2246 
log.go:181] (0xc00093f340) Reply frame received for 5\nI0715 00:30:04.074246 2246 log.go:181] (0xc00093f340) Data frame received for 5\nI0715 00:30:04.074291 2246 log.go:181] (0xc0005abea0) (5) Data frame handling\nI0715 00:30:04.074332 2246 log.go:181] (0xc0005abea0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0715 00:30:04.115285 2246 log.go:181] (0xc00093f340) Data frame received for 3\nI0715 00:30:04.115314 2246 log.go:181] (0xc0005aabe0) (3) Data frame handling\nI0715 00:30:04.115327 2246 log.go:181] (0xc0005aabe0) (3) Data frame sent\nI0715 00:30:04.115445 2246 log.go:181] (0xc00093f340) Data frame received for 5\nI0715 00:30:04.115490 2246 log.go:181] (0xc0005abea0) (5) Data frame handling\nI0715 00:30:04.115659 2246 log.go:181] (0xc00093f340) Data frame received for 3\nI0715 00:30:04.115688 2246 log.go:181] (0xc0005aabe0) (3) Data frame handling\nI0715 00:30:04.118274 2246 log.go:181] (0xc00093f340) Data frame received for 1\nI0715 00:30:04.118356 2246 log.go:181] (0xc000c019a0) (1) Data frame handling\nI0715 00:30:04.118386 2246 log.go:181] (0xc000c019a0) (1) Data frame sent\nI0715 00:30:04.118409 2246 log.go:181] (0xc00093f340) (0xc000c019a0) Stream removed, broadcasting: 1\nI0715 00:30:04.118438 2246 log.go:181] (0xc00093f340) Go away received\nI0715 00:30:04.118804 2246 log.go:181] (0xc00093f340) (0xc000c019a0) Stream removed, broadcasting: 1\nI0715 00:30:04.118831 2246 log.go:181] (0xc00093f340) (0xc0005aabe0) Stream removed, broadcasting: 3\nI0715 00:30:04.118848 2246 log.go:181] (0xc00093f340) (0xc0005abea0) Stream removed, broadcasting: 5\n" Jul 15 00:30:04.123: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 15 00:30:04.123: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 15 00:30:04.123: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 15 00:30:04.346: INFO: stderr: "I0715 00:30:04.247077 2264 log.go:181] (0xc000571c30) (0xc000f80780) Create stream\nI0715 00:30:04.247129 2264 log.go:181] (0xc000571c30) (0xc000f80780) Stream added, broadcasting: 1\nI0715 00:30:04.249610 2264 log.go:181] (0xc000571c30) Reply frame received for 1\nI0715 00:30:04.249645 2264 log.go:181] (0xc000571c30) (0xc0007ca0a0) Create stream\nI0715 00:30:04.249663 2264 log.go:181] (0xc000571c30) (0xc0007ca0a0) Stream added, broadcasting: 3\nI0715 00:30:04.251554 2264 log.go:181] (0xc000571c30) Reply frame received for 3\nI0715 00:30:04.251578 2264 log.go:181] (0xc000571c30) (0xc000ca8500) Create stream\nI0715 00:30:04.251587 2264 log.go:181] (0xc000571c30) (0xc000ca8500) Stream added, broadcasting: 5\nI0715 00:30:04.252437 2264 log.go:181] (0xc000571c30) Reply frame received for 5\nI0715 00:30:04.303315 2264 log.go:181] (0xc000571c30) Data frame received for 5\nI0715 00:30:04.303351 2264 log.go:181] (0xc000ca8500) (5) Data frame handling\nI0715 00:30:04.303373 2264 log.go:181] (0xc000ca8500) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0715 00:30:04.338043 2264 log.go:181] (0xc000571c30) Data frame received for 3\nI0715 00:30:04.338087 2264 log.go:181] (0xc0007ca0a0) (3) Data frame handling\nI0715 00:30:04.338116 2264 log.go:181] (0xc0007ca0a0) (3) Data frame sent\nI0715 00:30:04.338130 2264 log.go:181] (0xc000571c30) Data frame received for 3\nI0715 00:30:04.338140 2264 log.go:181] (0xc0007ca0a0) (3) Data frame handling\nI0715 00:30:04.338572 2264 log.go:181] (0xc000571c30) Data frame received for 5\nI0715 00:30:04.338594 2264 log.go:181] (0xc000ca8500) (5) Data frame handling\nI0715 00:30:04.340150 2264 log.go:181] (0xc000571c30) Data frame received for 1\nI0715 00:30:04.340167 2264 log.go:181] (0xc000f80780) (1) Data frame handling\nI0715 
00:30:04.340187 2264 log.go:181] (0xc000f80780) (1) Data frame sent\nI0715 00:30:04.340205 2264 log.go:181] (0xc000571c30) (0xc000f80780) Stream removed, broadcasting: 1\nI0715 00:30:04.340284 2264 log.go:181] (0xc000571c30) Go away received\nI0715 00:30:04.340509 2264 log.go:181] (0xc000571c30) (0xc000f80780) Stream removed, broadcasting: 1\nI0715 00:30:04.340525 2264 log.go:181] (0xc000571c30) (0xc0007ca0a0) Stream removed, broadcasting: 3\nI0715 00:30:04.340533 2264 log.go:181] (0xc000571c30) (0xc000ca8500) Stream removed, broadcasting: 5\n" Jul 15 00:30:04.346: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 15 00:30:04.346: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 15 00:30:04.346: INFO: Waiting for statefulset status.replicas updated to 0 Jul 15 00:30:04.403: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jul 15 00:30:14.461: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 15 00:30:14.461: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jul 15 00:30:14.461: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jul 15 00:30:14.493: INFO: POD NODE PHASE GRACE CONDITIONS Jul 15 00:30:14.493: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC }] Jul 15 00:30:14.493: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2020-07-15 00:29:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC }] Jul 15 00:30:14.493: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC }] Jul 15 00:30:14.493: INFO: Jul 15 00:30:14.493: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 15 00:30:15.524: INFO: POD NODE PHASE GRACE CONDITIONS Jul 15 00:30:15.524: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC }] Jul 15 00:30:15.524: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC }] Jul 15 00:30:15.524: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC }] Jul 15 00:30:15.524: INFO: Jul 15 00:30:15.524: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 15 00:30:16.606: INFO: POD NODE PHASE GRACE CONDITIONS Jul 15 00:30:16.606: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC }] Jul 15 00:30:16.606: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC }] Jul 15 00:30:16.606: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:05 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC }] Jul 15 00:30:16.606: INFO: Jul 15 00:30:16.606: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 15 00:30:17.610: INFO: POD NODE PHASE GRACE CONDITIONS Jul 15 00:30:17.610: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC }] Jul 15 00:30:17.610: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC }] Jul 15 00:30:17.610: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC }] Jul 15 00:30:17.610: INFO: Jul 15 00:30:17.610: INFO: StatefulSet ss has 
not reached scale 0, at 3 Jul 15 00:30:18.615: INFO: POD NODE PHASE GRACE CONDITIONS Jul 15 00:30:18.615: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC }] Jul 15 00:30:18.616: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC }] Jul 15 00:30:18.616: INFO: Jul 15 00:30:18.616: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 15 00:30:19.619: INFO: POD NODE PHASE GRACE CONDITIONS Jul 15 00:30:19.620: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC }] Jul 15 00:30:19.620: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC }] Jul 15 00:30:19.620: INFO: Jul 15 00:30:19.620: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 15 00:30:20.624: INFO: POD NODE PHASE GRACE CONDITIONS Jul 15 00:30:20.624: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC }] Jul 15 00:30:20.624: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC }] Jul 15 00:30:20.624: INFO: Jul 15 00:30:20.624: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 15 00:30:21.649: INFO: POD NODE PHASE GRACE CONDITIONS Jul 15 00:30:21.649: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with 
unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC }] Jul 15 00:30:21.649: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC }] Jul 15 00:30:21.649: INFO: Jul 15 00:30:21.649: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 15 00:30:22.654: INFO: POD NODE PHASE GRACE CONDITIONS Jul 15 00:30:22.654: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC }] Jul 15 00:30:22.654: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC }] Jul 15 00:30:22.654: INFO: Jul 15 00:30:22.654: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 15 00:30:23.660: INFO: POD NODE PHASE GRACE CONDITIONS Jul 15 00:30:23.660: INFO: ss-0 latest-worker2 Pending 
30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:32 +0000 UTC }]
Jul 15 00:30:23.660: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:30:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-15 00:29:52 +0000 UTC }]
Jul 15 00:30:23.660: INFO:
Jul 15 00:30:23.660: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-7881
Jul 15 00:30:24.664: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 15 00:30:24.798: INFO: rc: 1
Jul 15 00:30:24.798: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1
Jul 15 00:30:34.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec
--namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:30:34.903: INFO: rc: 1 Jul 15 00:30:34.903: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:30:44.903: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:30:44.998: INFO: rc: 1 Jul 15 00:30:44.998: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:30:54.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:30:56.051: INFO: rc: 1 Jul 15 00:30:56.051: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:31:06.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 
ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:31:06.149: INFO: rc: 1 Jul 15 00:31:06.149: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:31:16.149: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:31:16.261: INFO: rc: 1 Jul 15 00:31:16.261: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:31:26.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:31:26.374: INFO: rc: 1 Jul 15 00:31:26.374: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:31:36.374: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:31:36.473: INFO: rc: 1 Jul 15 00:31:36.473: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:31:46.473: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:31:46.575: INFO: rc: 1 Jul 15 00:31:46.575: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:31:56.575: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:31:56.690: INFO: rc: 1 Jul 15 00:31:56.690: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:32:06.691: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Jul 15 00:32:06.798: INFO: rc: 1 Jul 15 00:32:06.798: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:32:16.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:32:16.906: INFO: rc: 1 Jul 15 00:32:16.906: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:32:26.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:32:27.000: INFO: rc: 1 Jul 15 00:32:27.000: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:32:37.000: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ 
|| true' Jul 15 00:32:40.137: INFO: rc: 1 Jul 15 00:32:40.138: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:32:50.138: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:32:50.237: INFO: rc: 1 Jul 15 00:32:50.237: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:33:00.238: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:33:00.336: INFO: rc: 1 Jul 15 00:33:00.336: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:33:10.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 
00:33:10.437: INFO: rc: 1 Jul 15 00:33:10.437: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:33:20.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:33:20.541: INFO: rc: 1 Jul 15 00:33:20.541: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:33:30.541: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:33:30.641: INFO: rc: 1 Jul 15 00:33:30.641: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:33:40.641: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:33:40.743: INFO: rc: 1 
Jul 15 00:33:40.743: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:33:50.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:33:50.844: INFO: rc: 1 Jul 15 00:33:50.844: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:34:00.845: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:34:00.951: INFO: rc: 1 Jul 15 00:34:00.951: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:34:10.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:34:11.063: INFO: rc: 1 Jul 15 00:34:11.063: INFO: 
Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:34:21.064: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:34:21.174: INFO: rc: 1 Jul 15 00:34:21.174: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:34:31.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:34:31.314: INFO: rc: 1 Jul 15 00:34:31.314: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:34:41.315: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:34:41.417: INFO: rc: 1 Jul 15 00:34:41.417: INFO: Waiting 10s to retry failed 
RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:34:51.417: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:34:51.510: INFO: rc: 1 Jul 15 00:34:51.510: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:35:01.510: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:35:01.623: INFO: rc: 1 Jul 15 00:35:01.623: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 15 00:35:11.624: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:35:11.720: INFO: rc: 1 Jul 15 00:35:11.720: INFO: Waiting 10s to retry failed RunHostCmd: error running 
/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jul 15 00:35:21.720: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 15 00:35:21.833: INFO: rc: 1
Jul 15 00:35:21.833: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jul 15 00:35:31.833: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7881 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 15 00:35:31.934: INFO: rc: 1
Jul 15 00:35:31.934: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0:
Jul 15 00:35:31.934: INFO: Scaling statefulset ss to 0
Jul 15 00:35:31.955: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Jul 15 00:35:31.957: INFO: Deleting all statefulset in ns statefulset-7881
Jul 15 00:35:31.962: INFO: Scaling statefulset ss to 0
Jul 15 00:35:31.971: INFO: Waiting for statefulset status.replicas updated to 0
Jul 15 00:35:31.973: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:35:31.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7881" for this suite.
• [SLOW TEST:359.471 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":294,"completed":211,"skipped":3322,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:35:31.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: set up a multi version CRD
Jul 15 00:35:32.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:35:47.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3709" for this suite.
• [SLOW TEST:15.814 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":294,"completed":212,"skipped":3333,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:35:47.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jul 15 00:35:47.898: INFO: Waiting up to 5m0s for pod "downward-api-410502c5-7229-4e87-8979-41cb8faad978" in namespace "downward-api-718" to be "Succeeded or Failed"
Jul 15 00:35:47.935: INFO: Pod "downward-api-410502c5-7229-4e87-8979-41cb8faad978": Phase="Pending", Reason="", readiness=false. Elapsed: 36.577883ms
Jul 15 00:35:49.949: INFO: Pod "downward-api-410502c5-7229-4e87-8979-41cb8faad978": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051178007s
Jul 15 00:35:51.953: INFO: Pod "downward-api-410502c5-7229-4e87-8979-41cb8faad978": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.054823516s STEP: Saw pod success Jul 15 00:35:51.953: INFO: Pod "downward-api-410502c5-7229-4e87-8979-41cb8faad978" satisfied condition "Succeeded or Failed" Jul 15 00:35:51.956: INFO: Trying to get logs from node latest-worker pod downward-api-410502c5-7229-4e87-8979-41cb8faad978 container dapi-container: STEP: delete the pod Jul 15 00:35:51.999: INFO: Waiting for pod downward-api-410502c5-7229-4e87-8979-41cb8faad978 to disappear Jul 15 00:35:52.067: INFO: Pod downward-api-410502c5-7229-4e87-8979-41cb8faad978 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:35:52.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-718" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":294,"completed":213,"skipped":3367,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:35:52.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in 
volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-0b565b16-bc74-45e4-b33d-238da25aa365 STEP: Creating a pod to test consume configMaps Jul 15 00:35:52.149: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-103bd4b7-c139-40bb-badb-f140d0f1be41" in namespace "projected-6654" to be "Succeeded or Failed" Jul 15 00:35:52.161: INFO: Pod "pod-projected-configmaps-103bd4b7-c139-40bb-badb-f140d0f1be41": Phase="Pending", Reason="", readiness=false. Elapsed: 12.390382ms Jul 15 00:35:54.189: INFO: Pod "pod-projected-configmaps-103bd4b7-c139-40bb-badb-f140d0f1be41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040388874s Jul 15 00:35:56.193: INFO: Pod "pod-projected-configmaps-103bd4b7-c139-40bb-badb-f140d0f1be41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044390382s STEP: Saw pod success Jul 15 00:35:56.193: INFO: Pod "pod-projected-configmaps-103bd4b7-c139-40bb-badb-f140d0f1be41" satisfied condition "Succeeded or Failed" Jul 15 00:35:56.196: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-103bd4b7-c139-40bb-badb-f140d0f1be41 container projected-configmap-volume-test: STEP: delete the pod Jul 15 00:35:56.276: INFO: Waiting for pod pod-projected-configmaps-103bd4b7-c139-40bb-badb-f140d0f1be41 to disappear Jul 15 00:35:56.342: INFO: Pod pod-projected-configmaps-103bd4b7-c139-40bb-badb-f140d0f1be41 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:35:56.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6654" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":294,"completed":214,"skipped":3367,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:35:56.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jul 15 00:35:56.508: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2007 /api/v1/namespaces/watch-2007/configmaps/e2e-watch-test-resource-version 5f55cb91-b5c1-4031-9a5a-f4a8429d57e2 1232857 0 2020-07-15 00:35:56 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-07-15 00:35:56 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 15 00:35:56.508: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2007 /api/v1/namespaces/watch-2007/configmaps/e2e-watch-test-resource-version 5f55cb91-b5c1-4031-9a5a-f4a8429d57e2 1232858 0 2020-07-15 00:35:56 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-07-15 00:35:56 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:35:56.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2007" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":294,"completed":215,"skipped":3370,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:35:56.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8068 STEP: creating service affinity-clusterip-transition in namespace services-8068 STEP: creating replication controller affinity-clusterip-transition in namespace services-8068 I0715 00:35:56.663843 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-8068, replica count: 3 I0715 00:35:59.714240 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0715 00:36:02.714527 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 15 00:36:02.720: INFO: Creating new exec pod Jul 15 00:36:07.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-8068 execpod-affinityqrx4b -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jul 15 00:36:07.940: INFO: stderr: "I0715 00:36:07.882549 2846 log.go:181] (0xc000c1cd10) (0xc000e8e320) Create stream\nI0715 00:36:07.882623 2846 log.go:181] (0xc000c1cd10) (0xc000e8e320) Stream added, broadcasting: 1\nI0715 00:36:07.887304 2846 log.go:181] (0xc000c1cd10) Reply frame received for 1\nI0715 00:36:07.887361 2846 log.go:181] (0xc000c1cd10) (0xc0009a10e0) Create stream\nI0715 00:36:07.887380 2846 log.go:181] (0xc000c1cd10) (0xc0009a10e0) Stream added, broadcasting: 3\nI0715 00:36:07.888783 2846 log.go:181] (0xc000c1cd10) Reply frame received for 3\nI0715 00:36:07.888841 2846 log.go:181] (0xc000c1cd10) (0xc0009803c0) Create stream\nI0715 00:36:07.888855 2846 log.go:181] (0xc000c1cd10) (0xc0009803c0) Stream added, broadcasting: 5\nI0715 00:36:07.889641 2846 log.go:181] (0xc000c1cd10) Reply frame received for 5\nI0715 00:36:07.933607 2846 log.go:181] (0xc000c1cd10) Data frame received for 5\nI0715 00:36:07.933644 2846 log.go:181] (0xc0009803c0) (5) Data frame handling\nI0715 00:36:07.933671 2846 log.go:181] (0xc0009803c0) (5) Data frame sent\nI0715 00:36:07.933685 2846 log.go:181] (0xc000c1cd10) Data frame received for 5\nI0715 00:36:07.933694 2846 log.go:181] (0xc0009803c0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0715 00:36:07.933718 2846 log.go:181] (0xc0009803c0) (5) Data frame sent\nI0715 00:36:07.933729 2846 log.go:181] (0xc000c1cd10) Data frame 
received for 5\nI0715 00:36:07.933739 2846 log.go:181] (0xc0009803c0) (5) Data frame handling\nI0715 00:36:07.933780 2846 log.go:181] (0xc000c1cd10) Data frame received for 3\nI0715 00:36:07.933816 2846 log.go:181] (0xc0009a10e0) (3) Data frame handling\nI0715 00:36:07.935691 2846 log.go:181] (0xc000c1cd10) Data frame received for 1\nI0715 00:36:07.935718 2846 log.go:181] (0xc000e8e320) (1) Data frame handling\nI0715 00:36:07.935729 2846 log.go:181] (0xc000e8e320) (1) Data frame sent\nI0715 00:36:07.935742 2846 log.go:181] (0xc000c1cd10) (0xc000e8e320) Stream removed, broadcasting: 1\nI0715 00:36:07.935757 2846 log.go:181] (0xc000c1cd10) Go away received\nI0715 00:36:07.936123 2846 log.go:181] (0xc000c1cd10) (0xc000e8e320) Stream removed, broadcasting: 1\nI0715 00:36:07.936142 2846 log.go:181] (0xc000c1cd10) (0xc0009a10e0) Stream removed, broadcasting: 3\nI0715 00:36:07.936150 2846 log.go:181] (0xc000c1cd10) (0xc0009803c0) Stream removed, broadcasting: 5\n" Jul 15 00:36:07.940: INFO: stdout: "" Jul 15 00:36:07.941: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-8068 execpod-affinityqrx4b -- /bin/sh -x -c nc -zv -t -w 2 10.107.76.76 80' Jul 15 00:36:08.174: INFO: stderr: "I0715 00:36:08.088372 2864 log.go:181] (0xc00062f3f0) (0xc000e9c460) Create stream\nI0715 00:36:08.088422 2864 log.go:181] (0xc00062f3f0) (0xc000e9c460) Stream added, broadcasting: 1\nI0715 00:36:08.092931 2864 log.go:181] (0xc00062f3f0) Reply frame received for 1\nI0715 00:36:08.092970 2864 log.go:181] (0xc00062f3f0) (0xc000c990e0) Create stream\nI0715 00:36:08.092981 2864 log.go:181] (0xc00062f3f0) (0xc000c990e0) Stream added, broadcasting: 3\nI0715 00:36:08.094069 2864 log.go:181] (0xc00062f3f0) Reply frame received for 3\nI0715 00:36:08.094100 2864 log.go:181] (0xc00062f3f0) (0xc0008881e0) Create stream\nI0715 00:36:08.094110 2864 log.go:181] (0xc00062f3f0) (0xc0008881e0) Stream added, broadcasting: 5\nI0715 
00:36:08.095089 2864 log.go:181] (0xc00062f3f0) Reply frame received for 5\nI0715 00:36:08.168145 2864 log.go:181] (0xc00062f3f0) Data frame received for 3\nI0715 00:36:08.168172 2864 log.go:181] (0xc000c990e0) (3) Data frame handling\nI0715 00:36:08.168193 2864 log.go:181] (0xc00062f3f0) Data frame received for 5\nI0715 00:36:08.168201 2864 log.go:181] (0xc0008881e0) (5) Data frame handling\nI0715 00:36:08.168209 2864 log.go:181] (0xc0008881e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.107.76.76 80\nConnection to 10.107.76.76 80 port [tcp/http] succeeded!\nI0715 00:36:08.168416 2864 log.go:181] (0xc00062f3f0) Data frame received for 5\nI0715 00:36:08.168461 2864 log.go:181] (0xc0008881e0) (5) Data frame handling\nI0715 00:36:08.170399 2864 log.go:181] (0xc00062f3f0) Data frame received for 1\nI0715 00:36:08.170414 2864 log.go:181] (0xc000e9c460) (1) Data frame handling\nI0715 00:36:08.170421 2864 log.go:181] (0xc000e9c460) (1) Data frame sent\nI0715 00:36:08.170429 2864 log.go:181] (0xc00062f3f0) (0xc000e9c460) Stream removed, broadcasting: 1\nI0715 00:36:08.170662 2864 log.go:181] (0xc00062f3f0) (0xc000e9c460) Stream removed, broadcasting: 1\nI0715 00:36:08.170674 2864 log.go:181] (0xc00062f3f0) (0xc000c990e0) Stream removed, broadcasting: 3\nI0715 00:36:08.170771 2864 log.go:181] (0xc00062f3f0) Go away received\nI0715 00:36:08.170840 2864 log.go:181] (0xc00062f3f0) (0xc0008881e0) Stream removed, broadcasting: 5\n" Jul 15 00:36:08.175: INFO: stdout: "" Jul 15 00:36:08.184: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-8068 execpod-affinityqrx4b -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.107.76.76:80/ ; done' Jul 15 00:36:08.505: INFO: stderr: "I0715 00:36:08.329744 2882 log.go:181] (0xc000997080) (0xc000cd57c0) Create stream\nI0715 00:36:08.329823 2882 log.go:181] (0xc000997080) (0xc000cd57c0) Stream added, broadcasting: 1\nI0715 
00:36:08.334217 2882 log.go:181] (0xc000997080) Reply frame received for 1\nI0715 00:36:08.334265 2882 log.go:181] (0xc000997080) (0xc000476140) Create stream\nI0715 00:36:08.334277 2882 log.go:181] (0xc000997080) (0xc000476140) Stream added, broadcasting: 3\nI0715 00:36:08.335314 2882 log.go:181] (0xc000997080) Reply frame received for 3\nI0715 00:36:08.335368 2882 log.go:181] (0xc000997080) (0xc000476be0) Create stream\nI0715 00:36:08.335386 2882 log.go:181] (0xc000997080) (0xc000476be0) Stream added, broadcasting: 5\nI0715 00:36:08.336286 2882 log.go:181] (0xc000997080) Reply frame received for 5\nI0715 00:36:08.398769 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.398807 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.398821 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.398847 2882 log.go:181] (0xc000997080) Data frame received for 5\nI0715 00:36:08.398857 2882 log.go:181] (0xc000476be0) (5) Data frame handling\nI0715 00:36:08.398867 2882 log.go:181] (0xc000476be0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.404836 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.404866 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.404894 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.405122 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.405152 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.405166 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.405182 2882 log.go:181] (0xc000997080) Data frame received for 5\nI0715 00:36:08.405191 2882 log.go:181] (0xc000476be0) (5) Data frame handling\nI0715 00:36:08.405200 2882 log.go:181] (0xc000476be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.411523 2882 log.go:181] (0xc000997080) Data frame 
received for 3\nI0715 00:36:08.411543 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.411554 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.412292 2882 log.go:181] (0xc000997080) Data frame received for 5\nI0715 00:36:08.412320 2882 log.go:181] (0xc000476be0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.412343 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.412373 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.412388 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.412404 2882 log.go:181] (0xc000476be0) (5) Data frame sent\nI0715 00:36:08.418952 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.418972 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.418989 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.419370 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.419384 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.419392 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.419416 2882 log.go:181] (0xc000997080) Data frame received for 5\nI0715 00:36:08.419435 2882 log.go:181] (0xc000476be0) (5) Data frame handling\nI0715 00:36:08.419450 2882 log.go:181] (0xc000476be0) (5) Data frame sent\nI0715 00:36:08.419459 2882 log.go:181] (0xc000997080) Data frame received for 5\nI0715 00:36:08.419467 2882 log.go:181] (0xc000476be0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.419487 2882 log.go:181] (0xc000476be0) (5) Data frame sent\nI0715 00:36:08.427059 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.427086 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.427111 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.427702 2882 log.go:181] 
(0xc000997080) Data frame received for 3\nI0715 00:36:08.427730 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.427745 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.427764 2882 log.go:181] (0xc000997080) Data frame received for 5\nI0715 00:36:08.427779 2882 log.go:181] (0xc000476be0) (5) Data frame handling\nI0715 00:36:08.427794 2882 log.go:181] (0xc000476be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.433176 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.433209 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.433240 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.433941 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.433968 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.433980 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.434003 2882 log.go:181] (0xc000997080) Data frame received for 5\nI0715 00:36:08.434028 2882 log.go:181] (0xc000476be0) (5) Data frame handling\nI0715 00:36:08.434048 2882 log.go:181] (0xc000476be0) (5) Data frame sent\n+ echo\nI0715 00:36:08.434064 2882 log.go:181] (0xc000997080) Data frame received for 5\nI0715 00:36:08.434096 2882 log.go:181] (0xc000476be0) (5) Data frame handling\nI0715 00:36:08.434115 2882 log.go:181] (0xc000476be0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.439507 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.439523 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.439535 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.440315 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.440347 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.440361 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.440384 2882 
log.go:181] (0xc000997080) Data frame received for 5\nI0715 00:36:08.440400 2882 log.go:181] (0xc000476be0) (5) Data frame handling\nI0715 00:36:08.440424 2882 log.go:181] (0xc000476be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.446145 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.446166 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.446183 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.447158 2882 log.go:181] (0xc000997080) Data frame received for 5\nI0715 00:36:08.447190 2882 log.go:181] (0xc000476be0) (5) Data frame handling\nI0715 00:36:08.447203 2882 log.go:181] (0xc000476be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.447227 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.447241 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.447253 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.450476 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.450492 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.450500 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.452140 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.452192 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.452212 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.452239 2882 log.go:181] (0xc000997080) Data frame received for 5\nI0715 00:36:08.452256 2882 log.go:181] (0xc000476be0) (5) Data frame handling\nI0715 00:36:08.452279 2882 log.go:181] (0xc000476be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.456283 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.456301 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.456312 
2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.457034 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.457065 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.457082 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.457102 2882 log.go:181] (0xc000997080) Data frame received for 5\nI0715 00:36:08.457113 2882 log.go:181] (0xc000476be0) (5) Data frame handling\nI0715 00:36:08.457125 2882 log.go:181] (0xc000476be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.463313 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.463329 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.463341 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.464083 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.464102 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.464112 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.464126 2882 log.go:181] (0xc000997080) Data frame received for 5\nI0715 00:36:08.464133 2882 log.go:181] (0xc000476be0) (5) Data frame handling\nI0715 00:36:08.464142 2882 log.go:181] (0xc000476be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.469165 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.469179 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.469186 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.470032 2882 log.go:181] (0xc000997080) Data frame received for 5\nI0715 00:36:08.470067 2882 log.go:181] (0xc000476be0) (5) Data frame handling\nI0715 00:36:08.470087 2882 log.go:181] (0xc000476be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.470106 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.470117 
2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.470133 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.477163 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.477181 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.477196 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.477844 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.477883 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.477910 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.477928 2882 log.go:181] (0xc000997080) Data frame received for 5\nI0715 00:36:08.477943 2882 log.go:181] (0xc000476be0) (5) Data frame handling\nI0715 00:36:08.477950 2882 log.go:181] (0xc000476be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.482004 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.482029 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.482047 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.482364 2882 log.go:181] (0xc000997080) Data frame received for 5\nI0715 00:36:08.482401 2882 log.go:181] (0xc000476be0) (5) Data frame handling\nI0715 00:36:08.482419 2882 log.go:181] (0xc000476be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.482444 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.482466 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.482491 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.486903 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.486921 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.486942 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.487308 2882 log.go:181] (0xc000997080) Data frame received for 5\nI0715 
00:36:08.487328 2882 log.go:181] (0xc000476be0) (5) Data frame handling\nI0715 00:36:08.487336 2882 log.go:181] (0xc000476be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.487417 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.487438 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.487453 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.492200 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.492216 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.492225 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.492614 2882 log.go:181] (0xc000997080) Data frame received for 5\nI0715 00:36:08.492635 2882 log.go:181] (0xc000476be0) (5) Data frame handling\nI0715 00:36:08.492650 2882 log.go:181] (0xc000476be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0715 00:36:08.492856 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.492873 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.492880 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.492898 2882 log.go:181] (0xc000997080) Data frame received for 5\nI0715 00:36:08.492908 2882 log.go:181] (0xc000476be0) (5) Data frame handling\nI0715 00:36:08.492916 2882 log.go:181] (0xc000476be0) (5) Data frame sent\n http://10.107.76.76:80/\nI0715 00:36:08.498016 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.498041 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.498062 2882 log.go:181] (0xc000476140) (3) Data frame sent\nI0715 00:36:08.498860 2882 log.go:181] (0xc000997080) Data frame received for 3\nI0715 00:36:08.498881 2882 log.go:181] (0xc000476140) (3) Data frame handling\nI0715 00:36:08.498899 2882 log.go:181] (0xc000997080) Data frame received for 5\nI0715 00:36:08.498913 2882 log.go:181] (0xc000476be0) (5) Data frame 
handling\nI0715 00:36:08.500502 2882 log.go:181] (0xc000997080) Data frame received for 1\nI0715 00:36:08.500519 2882 log.go:181] (0xc000cd57c0) (1) Data frame handling\nI0715 00:36:08.500528 2882 log.go:181] (0xc000cd57c0) (1) Data frame sent\nI0715 00:36:08.500540 2882 log.go:181] (0xc000997080) (0xc000cd57c0) Stream removed, broadcasting: 1\nI0715 00:36:08.500554 2882 log.go:181] (0xc000997080) Go away received\nI0715 00:36:08.500994 2882 log.go:181] (0xc000997080) (0xc000cd57c0) Stream removed, broadcasting: 1\nI0715 00:36:08.501018 2882 log.go:181] (0xc000997080) (0xc000476140) Stream removed, broadcasting: 3\nI0715 00:36:08.501036 2882 log.go:181] (0xc000997080) (0xc000476be0) Stream removed, broadcasting: 5\n" Jul 15 00:36:08.506: INFO: stdout: "\naffinity-clusterip-transition-vzk5z\naffinity-clusterip-transition-vzk5z\naffinity-clusterip-transition-vzk5z\naffinity-clusterip-transition-wr74s\naffinity-clusterip-transition-wr74s\naffinity-clusterip-transition-wr74s\naffinity-clusterip-transition-wr74s\naffinity-clusterip-transition-vzk5z\naffinity-clusterip-transition-wx65w\naffinity-clusterip-transition-vzk5z\naffinity-clusterip-transition-wr74s\naffinity-clusterip-transition-wx65w\naffinity-clusterip-transition-wr74s\naffinity-clusterip-transition-vzk5z\naffinity-clusterip-transition-vzk5z\naffinity-clusterip-transition-wx65w" Jul 15 00:36:08.506: INFO: Received response from host: affinity-clusterip-transition-vzk5z Jul 15 00:36:08.506: INFO: Received response from host: affinity-clusterip-transition-vzk5z Jul 15 00:36:08.506: INFO: Received response from host: affinity-clusterip-transition-vzk5z Jul 15 00:36:08.506: INFO: Received response from host: affinity-clusterip-transition-wr74s Jul 15 00:36:08.506: INFO: Received response from host: affinity-clusterip-transition-wr74s Jul 15 00:36:08.506: INFO: Received response from host: affinity-clusterip-transition-wr74s Jul 15 00:36:08.506: INFO: Received response from host: 
affinity-clusterip-transition-wr74s Jul 15 00:36:08.506: INFO: Received response from host: affinity-clusterip-transition-vzk5z Jul 15 00:36:08.506: INFO: Received response from host: affinity-clusterip-transition-wx65w Jul 15 00:36:08.506: INFO: Received response from host: affinity-clusterip-transition-vzk5z Jul 15 00:36:08.506: INFO: Received response from host: affinity-clusterip-transition-wr74s Jul 15 00:36:08.506: INFO: Received response from host: affinity-clusterip-transition-wx65w Jul 15 00:36:08.506: INFO: Received response from host: affinity-clusterip-transition-wr74s Jul 15 00:36:08.506: INFO: Received response from host: affinity-clusterip-transition-vzk5z Jul 15 00:36:08.506: INFO: Received response from host: affinity-clusterip-transition-vzk5z Jul 15 00:36:08.506: INFO: Received response from host: affinity-clusterip-transition-wx65w Jul 15 00:36:08.515: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-8068 execpod-affinityqrx4b -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.107.76.76:80/ ; done' Jul 15 00:36:08.802: INFO: stderr: "I0715 00:36:08.654791 2900 log.go:181] (0xc000140420) (0xc000a8f2c0) Create stream\nI0715 00:36:08.654833 2900 log.go:181] (0xc000140420) (0xc000a8f2c0) Stream added, broadcasting: 1\nI0715 00:36:08.658750 2900 log.go:181] (0xc000140420) Reply frame received for 1\nI0715 00:36:08.658836 2900 log.go:181] (0xc000140420) (0xc000891220) Create stream\nI0715 00:36:08.658880 2900 log.go:181] (0xc000140420) (0xc000891220) Stream added, broadcasting: 3\nI0715 00:36:08.659757 2900 log.go:181] (0xc000140420) Reply frame received for 3\nI0715 00:36:08.659793 2900 log.go:181] (0xc000140420) (0xc0006ce500) Create stream\nI0715 00:36:08.659808 2900 log.go:181] (0xc000140420) (0xc0006ce500) Stream added, broadcasting: 5\nI0715 00:36:08.660659 2900 log.go:181] (0xc000140420) Reply frame received for 5\nI0715 
00:36:08.699888 2900 log.go:181] (0xc000140420) Data frame received for 5\nI0715 00:36:08.699907 2900 log.go:181] (0xc0006ce500) (5) Data frame handling\nI0715 00:36:08.699915 2900 log.go:181] (0xc0006ce500) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.699933 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.699938 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.699954 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.703564 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.703601 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.703631 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.704388 2900 log.go:181] (0xc000140420) Data frame received for 5\nI0715 00:36:08.704406 2900 log.go:181] (0xc0006ce500) (5) Data frame handling\nI0715 00:36:08.704421 2900 log.go:181] (0xc0006ce500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.704438 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.704456 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.704475 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.710990 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.711026 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.711057 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.711482 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.711505 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.711521 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.711534 2900 log.go:181] (0xc000140420) Data frame received for 5\nI0715 00:36:08.711546 2900 log.go:181] (0xc0006ce500) (5) Data frame handling\nI0715 00:36:08.711577 2900 log.go:181] (0xc0006ce500) 
(5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.715464 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.715502 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.715547 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.716143 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.716159 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.716201 2900 log.go:181] (0xc000140420) Data frame received for 5\nI0715 00:36:08.716254 2900 log.go:181] (0xc0006ce500) (5) Data frame handling\nI0715 00:36:08.716283 2900 log.go:181] (0xc0006ce500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.716308 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.719553 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.719565 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.719570 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.720487 2900 log.go:181] (0xc000140420) Data frame received for 5\nI0715 00:36:08.720494 2900 log.go:181] (0xc0006ce500) (5) Data frame handling\nI0715 00:36:08.720499 2900 log.go:181] (0xc0006ce500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.720533 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.720560 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.720583 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.727432 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.727451 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.727461 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.728040 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.728074 2900 log.go:181] (0xc000891220) 
(3) Data frame handling\nI0715 00:36:08.728086 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.728119 2900 log.go:181] (0xc000140420) Data frame received for 5\nI0715 00:36:08.728175 2900 log.go:181] (0xc0006ce500) (5) Data frame handling\nI0715 00:36:08.728216 2900 log.go:181] (0xc0006ce500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.731700 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.731720 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.731735 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.732521 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.732561 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.732576 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.732598 2900 log.go:181] (0xc000140420) Data frame received for 5\nI0715 00:36:08.732614 2900 log.go:181] (0xc0006ce500) (5) Data frame handling\nI0715 00:36:08.732640 2900 log.go:181] (0xc0006ce500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0715 00:36:08.732654 2900 log.go:181] (0xc000140420) Data frame received for 5\nI0715 00:36:08.732674 2900 log.go:181] (0xc0006ce500) (5) Data frame handling\nI0715 00:36:08.732686 2900 log.go:181] (0xc0006ce500) (5) Data frame sent\n http://10.107.76.76:80/\nI0715 00:36:08.737475 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.737500 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.737519 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.738393 2900 log.go:181] (0xc000140420) Data frame received for 5\nI0715 00:36:08.738409 2900 log.go:181] (0xc0006ce500) (5) Data frame handling\nI0715 00:36:08.738426 2900 log.go:181] (0xc0006ce500) (5) Data frame sent\nI0715 00:36:08.738435 2900 log.go:181] (0xc000140420) Data frame received for 5\nI0715 00:36:08.738443 2900 log.go:181] 
(0xc0006ce500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.738470 2900 log.go:181] (0xc0006ce500) (5) Data frame sent\nI0715 00:36:08.738480 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.738491 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.738499 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.744466 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.744508 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.744540 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.745105 2900 log.go:181] (0xc000140420) Data frame received for 5\nI0715 00:36:08.745155 2900 log.go:181] (0xc0006ce500) (5) Data frame handling\nI0715 00:36:08.745177 2900 log.go:181] (0xc0006ce500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.745207 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.745220 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.745234 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.748437 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.748465 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.748487 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.749456 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.749483 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.749493 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.749514 2900 log.go:181] (0xc000140420) Data frame received for 5\nI0715 00:36:08.749537 2900 log.go:181] (0xc0006ce500) (5) Data frame handling\nI0715 00:36:08.749560 2900 log.go:181] (0xc0006ce500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.754924 2900 log.go:181] 
(0xc000140420) Data frame received for 3\nI0715 00:36:08.754939 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.754957 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.755912 2900 log.go:181] (0xc000140420) Data frame received for 5\nI0715 00:36:08.755944 2900 log.go:181] (0xc0006ce500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.755972 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.756006 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.756026 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.756047 2900 log.go:181] (0xc0006ce500) (5) Data frame sent\nI0715 00:36:08.759694 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.759723 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.759750 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.760522 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.760551 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.760571 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.760595 2900 log.go:181] (0xc000140420) Data frame received for 5\nI0715 00:36:08.760609 2900 log.go:181] (0xc0006ce500) (5) Data frame handling\nI0715 00:36:08.760632 2900 log.go:181] (0xc0006ce500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.766382 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.766401 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.766415 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.767192 2900 log.go:181] (0xc000140420) Data frame received for 5\nI0715 00:36:08.767203 2900 log.go:181] (0xc0006ce500) (5) Data frame handling\nI0715 00:36:08.767209 2900 log.go:181] (0xc0006ce500) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.767316 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.767347 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.767378 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.773816 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.773851 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.773881 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.774536 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.774567 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.774587 2900 log.go:181] (0xc000140420) Data frame received for 5\nI0715 00:36:08.774609 2900 log.go:181] (0xc0006ce500) (5) Data frame handling\nI0715 00:36:08.774622 2900 log.go:181] (0xc0006ce500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.774641 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.779833 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.779853 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.779959 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.780555 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.780577 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.780590 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.780618 2900 log.go:181] (0xc000140420) Data frame received for 5\nI0715 00:36:08.780646 2900 log.go:181] (0xc0006ce500) (5) Data frame handling\nI0715 00:36:08.780684 2900 log.go:181] (0xc0006ce500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.787103 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.787126 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 
00:36:08.787146 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.788038 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.788054 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.788063 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.788094 2900 log.go:181] (0xc000140420) Data frame received for 5\nI0715 00:36:08.788120 2900 log.go:181] (0xc0006ce500) (5) Data frame handling\nI0715 00:36:08.788147 2900 log.go:181] (0xc0006ce500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.76.76:80/\nI0715 00:36:08.794169 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.794193 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.794212 2900 log.go:181] (0xc000891220) (3) Data frame sent\nI0715 00:36:08.794992 2900 log.go:181] (0xc000140420) Data frame received for 3\nI0715 00:36:08.795011 2900 log.go:181] (0xc000891220) (3) Data frame handling\nI0715 00:36:08.795220 2900 log.go:181] (0xc000140420) Data frame received for 5\nI0715 00:36:08.795250 2900 log.go:181] (0xc0006ce500) (5) Data frame handling\nI0715 00:36:08.797148 2900 log.go:181] (0xc000140420) Data frame received for 1\nI0715 00:36:08.797179 2900 log.go:181] (0xc000a8f2c0) (1) Data frame handling\nI0715 00:36:08.797207 2900 log.go:181] (0xc000a8f2c0) (1) Data frame sent\nI0715 00:36:08.797235 2900 log.go:181] (0xc000140420) (0xc000a8f2c0) Stream removed, broadcasting: 1\nI0715 00:36:08.797264 2900 log.go:181] (0xc000140420) Go away received\nI0715 00:36:08.797661 2900 log.go:181] (0xc000140420) (0xc000a8f2c0) Stream removed, broadcasting: 1\nI0715 00:36:08.797686 2900 log.go:181] (0xc000140420) (0xc000891220) Stream removed, broadcasting: 3\nI0715 00:36:08.797701 2900 log.go:181] (0xc000140420) (0xc0006ce500) Stream removed, broadcasting: 5\n" Jul 15 00:36:08.803: INFO: stdout: 
"\naffinity-clusterip-transition-wr74s\naffinity-clusterip-transition-wr74s\naffinity-clusterip-transition-wr74s\naffinity-clusterip-transition-wr74s\naffinity-clusterip-transition-wr74s\naffinity-clusterip-transition-wr74s\naffinity-clusterip-transition-wr74s\naffinity-clusterip-transition-wr74s\naffinity-clusterip-transition-wr74s\naffinity-clusterip-transition-wr74s\naffinity-clusterip-transition-wr74s\naffinity-clusterip-transition-wr74s\naffinity-clusterip-transition-wr74s\naffinity-clusterip-transition-wr74s\naffinity-clusterip-transition-wr74s\naffinity-clusterip-transition-wr74s" Jul 15 00:36:08.803: INFO: Received response from host: affinity-clusterip-transition-wr74s Jul 15 00:36:08.803: INFO: Received response from host: affinity-clusterip-transition-wr74s Jul 15 00:36:08.803: INFO: Received response from host: affinity-clusterip-transition-wr74s Jul 15 00:36:08.803: INFO: Received response from host: affinity-clusterip-transition-wr74s Jul 15 00:36:08.803: INFO: Received response from host: affinity-clusterip-transition-wr74s Jul 15 00:36:08.803: INFO: Received response from host: affinity-clusterip-transition-wr74s Jul 15 00:36:08.803: INFO: Received response from host: affinity-clusterip-transition-wr74s Jul 15 00:36:08.803: INFO: Received response from host: affinity-clusterip-transition-wr74s Jul 15 00:36:08.803: INFO: Received response from host: affinity-clusterip-transition-wr74s Jul 15 00:36:08.803: INFO: Received response from host: affinity-clusterip-transition-wr74s Jul 15 00:36:08.803: INFO: Received response from host: affinity-clusterip-transition-wr74s Jul 15 00:36:08.803: INFO: Received response from host: affinity-clusterip-transition-wr74s Jul 15 00:36:08.803: INFO: Received response from host: affinity-clusterip-transition-wr74s Jul 15 00:36:08.803: INFO: Received response from host: affinity-clusterip-transition-wr74s Jul 15 00:36:08.803: INFO: Received response from host: affinity-clusterip-transition-wr74s Jul 15 00:36:08.803: 
INFO: Received response from host: affinity-clusterip-transition-wr74s Jul 15 00:36:08.803: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-8068, will wait for the garbage collector to delete the pods Jul 15 00:36:09.069: INFO: Deleting ReplicationController affinity-clusterip-transition took: 130.983853ms Jul 15 00:36:09.469: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 400.260728ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:36:19.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8068" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:22.696 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":294,"completed":216,"skipped":3374,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:36:19.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 00:36:19.305: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jul 15 00:36:22.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5461 create -f -' Jul 15 00:36:25.533: INFO: stderr: "" Jul 15 00:36:25.533: INFO: stdout: "e2e-test-crd-publish-openapi-5469-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jul 15 00:36:25.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5461 delete e2e-test-crd-publish-openapi-5469-crds test-cr' Jul 15 00:36:25.650: INFO: stderr: "" Jul 15 00:36:25.650: INFO: stdout: "e2e-test-crd-publish-openapi-5469-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jul 15 00:36:25.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5461 apply -f -' Jul 15 00:36:25.966: INFO: stderr: "" Jul 15 00:36:25.966: INFO: stdout: "e2e-test-crd-publish-openapi-5469-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jul 15 
00:36:25.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5461 delete e2e-test-crd-publish-openapi-5469-crds test-cr' Jul 15 00:36:26.076: INFO: stderr: "" Jul 15 00:36:26.076: INFO: stdout: "e2e-test-crd-publish-openapi-5469-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jul 15 00:36:26.076: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5469-crds' Jul 15 00:36:26.343: INFO: stderr: "" Jul 15 00:36:26.343: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5469-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:36:28.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5461" for this suite. 
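[Editor's note] The client-side validation step above pipes a custom-resource manifest into `kubectl create -f -` and `kubectl apply -f -`, but the stdin content is not captured in the log. A minimal sketch of what such a CR could look like — the API group and kind are taken from the log's `kubectl explain` output, while the `spec` fields are hypothetical — showing why the request succeeds: the CRD preserves unknown fields at the schema root, so undeclared properties pass client-side validation.

```yaml
# Hypothetical CR for this test's CRD; group/kind from the log, spec fields invented.
apiVersion: crd-publish-openapi-test-unknown-at-root.example.com/v1
kind: E2e-test-crd-publish-openapi-5469-crd
metadata:
  name: test-cr
spec:
  unknownField: "anything"   # not declared in any schema; accepted because the
  another: 42                # CRD preserves unknown fields at the schema root
```

This also explains the empty DESCRIPTION in the `kubectl explain` output above: with unknown fields preserved at the root, the published OpenAPI schema carries no structured properties to document.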
• [SLOW TEST:9.062 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":294,"completed":217,"skipped":3377,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:36:28.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-2trh7 in namespace proxy-2326 I0715 00:36:28.457396 7 runners.go:190] Created replication controller with name: proxy-service-2trh7, namespace: proxy-2326, replica count: 1 I0715 00:36:29.507924 7 
runners.go:190] proxy-service-2trh7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0715 00:36:30.508165 7 runners.go:190] proxy-service-2trh7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0715 00:36:31.508488 7 runners.go:190] proxy-service-2trh7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0715 00:36:32.508715 7 runners.go:190] proxy-service-2trh7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0715 00:36:33.508934 7 runners.go:190] proxy-service-2trh7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0715 00:36:34.509191 7 runners.go:190] proxy-service-2trh7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0715 00:36:35.509440 7 runners.go:190] proxy-service-2trh7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0715 00:36:36.509689 7 runners.go:190] proxy-service-2trh7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0715 00:36:37.509875 7 runners.go:190] proxy-service-2trh7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0715 00:36:38.510088 7 runners.go:190] proxy-service-2trh7 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 15 00:36:38.513: INFO: setup took 10.163650845s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jul 15 00:36:38.523: INFO: (0) 
/api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 9.179365ms) Jul 15 00:36:38.523: INFO: (0) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname1/proxy/: foo (200; 9.41185ms) Jul 15 00:36:38.523: INFO: (0) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname2/proxy/: bar (200; 9.383366ms) Jul 15 00:36:38.523: INFO: (0) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:1080/proxy/: ... (200; 9.499453ms) Jul 15 00:36:38.523: INFO: (0) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname2/proxy/: bar (200; 9.269662ms) Jul 15 00:36:38.524: INFO: (0) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn/proxy/: test (200; 9.964933ms) Jul 15 00:36:38.524: INFO: (0) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname1/proxy/: foo (200; 10.045014ms) Jul 15 00:36:38.524: INFO: (0) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 10.19607ms) Jul 15 00:36:38.524: INFO: (0) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 10.135184ms) Jul 15 00:36:38.524: INFO: (0) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:1080/proxy/: test<... (200; 10.165513ms) Jul 15 00:36:38.526: INFO: (0) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 12.081948ms) Jul 15 00:36:38.529: INFO: (0) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:443/proxy/: test (200; 3.430361ms) Jul 15 00:36:38.534: INFO: (1) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:1080/proxy/: test<... 
(200; 3.483522ms) Jul 15 00:36:38.534: INFO: (1) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 3.763113ms) Jul 15 00:36:38.534: INFO: (1) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 3.812119ms) Jul 15 00:36:38.534: INFO: (1) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:460/proxy/: tls baz (200; 3.802821ms) Jul 15 00:36:38.535: INFO: (1) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 4.07794ms) Jul 15 00:36:38.535: INFO: (1) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:462/proxy/: tls qux (200; 4.294269ms) Jul 15 00:36:38.535: INFO: (1) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:1080/proxy/: ... (200; 4.312556ms) Jul 15 00:36:38.535: INFO: (1) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:443/proxy/: test<... (200; 3.764263ms) Jul 15 00:36:38.539: INFO: (2) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 3.744316ms) Jul 15 00:36:38.540: INFO: (2) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname1/proxy/: foo (200; 4.424134ms) Jul 15 00:36:38.540: INFO: (2) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:1080/proxy/: ... 
(200; 4.607071ms) Jul 15 00:36:38.540: INFO: (2) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn/proxy/: test (200; 4.69774ms) Jul 15 00:36:38.540: INFO: (2) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname1/proxy/: foo (200; 4.643048ms) Jul 15 00:36:38.540: INFO: (2) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 4.741792ms) Jul 15 00:36:38.540: INFO: (2) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname2/proxy/: bar (200; 4.740966ms) Jul 15 00:36:38.540: INFO: (2) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname2/proxy/: bar (200; 4.807412ms) Jul 15 00:36:38.541: INFO: (2) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:462/proxy/: tls qux (200; 4.89631ms) Jul 15 00:36:38.541: INFO: (2) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname2/proxy/: tls qux (200; 4.879363ms) Jul 15 00:36:38.541: INFO: (2) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname1/proxy/: tls baz (200; 5.066952ms) Jul 15 00:36:38.543: INFO: (3) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 2.395986ms) Jul 15 00:36:38.543: INFO: (3) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:462/proxy/: tls qux (200; 2.501077ms) Jul 15 00:36:38.544: INFO: (3) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:1080/proxy/: test<... 
(200; 2.682511ms) Jul 15 00:36:38.545: INFO: (3) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 3.669571ms) Jul 15 00:36:38.545: INFO: (3) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn/proxy/: test (200; 3.897973ms) Jul 15 00:36:38.545: INFO: (3) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 3.825423ms) Jul 15 00:36:38.545: INFO: (3) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 3.864492ms) Jul 15 00:36:38.545: INFO: (3) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:460/proxy/: tls baz (200; 3.93872ms) Jul 15 00:36:38.545: INFO: (3) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:1080/proxy/: ... (200; 4.286042ms) Jul 15 00:36:38.546: INFO: (3) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname2/proxy/: bar (200; 4.563274ms) Jul 15 00:36:38.546: INFO: (3) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:443/proxy/: test<... (200; 3.97739ms) Jul 15 00:36:38.552: INFO: (4) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:1080/proxy/: ... 
(200; 4.788817ms) Jul 15 00:36:38.553: INFO: (4) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname1/proxy/: tls baz (200; 5.225455ms) Jul 15 00:36:38.553: INFO: (4) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname2/proxy/: bar (200; 5.362704ms) Jul 15 00:36:38.553: INFO: (4) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname1/proxy/: foo (200; 5.492098ms) Jul 15 00:36:38.553: INFO: (4) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname2/proxy/: tls qux (200; 5.700166ms) Jul 15 00:36:38.553: INFO: (4) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 5.67165ms) Jul 15 00:36:38.553: INFO: (4) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 5.743515ms) Jul 15 00:36:38.553: INFO: (4) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname2/proxy/: bar (200; 5.79486ms) Jul 15 00:36:38.553: INFO: (4) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 5.802092ms) Jul 15 00:36:38.553: INFO: (4) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn/proxy/: test (200; 5.785065ms) Jul 15 00:36:38.553: INFO: (4) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname1/proxy/: foo (200; 5.818701ms) Jul 15 00:36:38.553: INFO: (4) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 5.861181ms) Jul 15 00:36:38.553: INFO: (4) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:462/proxy/: tls qux (200; 5.959663ms) Jul 15 00:36:38.557: INFO: (5) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:1080/proxy/: ... 
(200; 3.240306ms) Jul 15 00:36:38.557: INFO: (5) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname1/proxy/: foo (200; 3.705954ms) Jul 15 00:36:38.557: INFO: (5) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 3.627062ms) Jul 15 00:36:38.557: INFO: (5) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:462/proxy/: tls qux (200; 3.709927ms) Jul 15 00:36:38.557: INFO: (5) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 3.763997ms) Jul 15 00:36:38.558: INFO: (5) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname1/proxy/: foo (200; 4.2534ms) Jul 15 00:36:38.558: INFO: (5) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 4.339931ms) Jul 15 00:36:38.558: INFO: (5) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:443/proxy/: test (200; 4.341263ms) Jul 15 00:36:38.558: INFO: (5) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 4.421316ms) Jul 15 00:36:38.559: INFO: (5) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname2/proxy/: bar (200; 5.180615ms) Jul 15 00:36:38.559: INFO: (5) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:1080/proxy/: test<... 
(200; 5.223866ms) Jul 15 00:36:38.559: INFO: (5) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname1/proxy/: tls baz (200; 5.28788ms) Jul 15 00:36:38.559: INFO: (5) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:460/proxy/: tls baz (200; 5.26743ms) Jul 15 00:36:38.559: INFO: (5) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname2/proxy/: tls qux (200; 5.312025ms) Jul 15 00:36:38.559: INFO: (5) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname2/proxy/: bar (200; 5.382499ms) Jul 15 00:36:38.562: INFO: (6) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:462/proxy/: tls qux (200; 3.086675ms) Jul 15 00:36:38.563: INFO: (6) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname1/proxy/: foo (200; 4.160637ms) Jul 15 00:36:38.563: INFO: (6) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 4.171593ms) Jul 15 00:36:38.563: INFO: (6) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname2/proxy/: bar (200; 4.241382ms) Jul 15 00:36:38.563: INFO: (6) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:1080/proxy/: ... 
(200; 4.239645ms) Jul 15 00:36:38.563: INFO: (6) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname1/proxy/: foo (200; 4.304071ms) Jul 15 00:36:38.563: INFO: (6) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname1/proxy/: tls baz (200; 4.473403ms) Jul 15 00:36:38.563: INFO: (6) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname2/proxy/: tls qux (200; 4.417671ms) Jul 15 00:36:38.563: INFO: (6) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:460/proxy/: tls baz (200; 4.506912ms) Jul 15 00:36:38.563: INFO: (6) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname2/proxy/: bar (200; 4.560927ms) Jul 15 00:36:38.563: INFO: (6) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:443/proxy/: test (200; 4.629406ms) Jul 15 00:36:38.563: INFO: (6) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 4.710368ms) Jul 15 00:36:38.563: INFO: (6) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:1080/proxy/: test<... (200; 4.672117ms) Jul 15 00:36:38.564: INFO: (6) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 4.747341ms) Jul 15 00:36:38.564: INFO: (6) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 4.688599ms) Jul 15 00:36:38.567: INFO: (7) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:1080/proxy/: ... (200; 2.760054ms) Jul 15 00:36:38.567: INFO: (7) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 3.047476ms) Jul 15 00:36:38.567: INFO: (7) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:1080/proxy/: test<... 
(200; 3.378138ms) Jul 15 00:36:38.567: INFO: (7) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 2.499526ms) Jul 15 00:36:38.567: INFO: (7) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:462/proxy/: tls qux (200; 2.945507ms) Jul 15 00:36:38.567: INFO: (7) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 3.015018ms) Jul 15 00:36:38.567: INFO: (7) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:460/proxy/: tls baz (200; 3.291885ms) Jul 15 00:36:38.568: INFO: (7) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:443/proxy/: test (200; 3.988569ms) Jul 15 00:36:38.569: INFO: (7) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname1/proxy/: tls baz (200; 4.361604ms) Jul 15 00:36:38.569: INFO: (7) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname2/proxy/: bar (200; 4.519398ms) Jul 15 00:36:38.569: INFO: (7) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname1/proxy/: foo (200; 3.952601ms) Jul 15 00:36:38.569: INFO: (7) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname2/proxy/: bar (200; 4.027423ms) Jul 15 00:36:38.569: INFO: (7) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname2/proxy/: tls qux (200; 4.473077ms) Jul 15 00:36:38.573: INFO: (8) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:1080/proxy/: ... 
(200; 3.552124ms) Jul 15 00:36:38.573: INFO: (8) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 3.633436ms) Jul 15 00:36:38.573: INFO: (8) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname2/proxy/: bar (200; 4.098783ms) Jul 15 00:36:38.573: INFO: (8) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:462/proxy/: tls qux (200; 4.136645ms) Jul 15 00:36:38.573: INFO: (8) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 4.075129ms) Jul 15 00:36:38.573: INFO: (8) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname2/proxy/: tls qux (200; 4.175887ms) Jul 15 00:36:38.573: INFO: (8) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 4.547191ms) Jul 15 00:36:38.574: INFO: (8) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 4.914031ms) Jul 15 00:36:38.574: INFO: (8) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:443/proxy/: test (200; 5.125831ms) Jul 15 00:36:38.574: INFO: (8) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:1080/proxy/: test<... (200; 5.069534ms) Jul 15 00:36:38.574: INFO: (8) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname1/proxy/: tls baz (200; 5.123357ms) Jul 15 00:36:38.574: INFO: (8) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname1/proxy/: foo (200; 5.113709ms) Jul 15 00:36:38.579: INFO: (9) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 4.69036ms) Jul 15 00:36:38.579: INFO: (9) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:462/proxy/: tls qux (200; 4.721724ms) Jul 15 00:36:38.579: INFO: (9) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:1080/proxy/: test<... 
(200; 4.881051ms) Jul 15 00:36:38.579: INFO: (9) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 4.857965ms) Jul 15 00:36:38.579: INFO: (9) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:443/proxy/: test (200; 4.892552ms) Jul 15 00:36:38.579: INFO: (9) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:460/proxy/: tls baz (200; 4.946694ms) Jul 15 00:36:38.579: INFO: (9) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:1080/proxy/: ... (200; 4.949622ms) Jul 15 00:36:38.579: INFO: (9) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 4.945639ms) Jul 15 00:36:38.579: INFO: (9) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 5.017584ms) Jul 15 00:36:38.580: INFO: (9) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname2/proxy/: bar (200; 5.442117ms) Jul 15 00:36:38.580: INFO: (9) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname1/proxy/: foo (200; 5.894207ms) Jul 15 00:36:38.580: INFO: (9) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname2/proxy/: bar (200; 5.871324ms) Jul 15 00:36:38.580: INFO: (9) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname1/proxy/: foo (200; 5.887478ms) Jul 15 00:36:38.580: INFO: (9) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname1/proxy/: tls baz (200; 5.94932ms) Jul 15 00:36:38.580: INFO: (9) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname2/proxy/: tls qux (200; 6.035578ms) Jul 15 00:36:38.583: INFO: (10) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn/proxy/: test (200; 2.888669ms) Jul 15 00:36:38.584: INFO: (10) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:1080/proxy/: test<... 
(200; 3.146654ms) Jul 15 00:36:38.584: INFO: (10) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:462/proxy/: tls qux (200; 3.237044ms) Jul 15 00:36:38.584: INFO: (10) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 3.190517ms) Jul 15 00:36:38.584: INFO: (10) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:1080/proxy/: ... (200; 3.245287ms) Jul 15 00:36:38.584: INFO: (10) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:460/proxy/: tls baz (200; 3.221891ms) Jul 15 00:36:38.584: INFO: (10) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:443/proxy/: ... (200; 6.068335ms) Jul 15 00:36:38.592: INFO: (11) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:1080/proxy/: test<... (200; 6.131559ms) Jul 15 00:36:38.592: INFO: (11) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 6.087694ms) Jul 15 00:36:38.592: INFO: (11) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn/proxy/: test (200; 6.129223ms) Jul 15 00:36:38.592: INFO: (11) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname1/proxy/: foo (200; 6.473496ms) Jul 15 00:36:38.592: INFO: (11) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname2/proxy/: bar (200; 6.513305ms) Jul 15 00:36:38.592: INFO: (11) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:462/proxy/: tls qux (200; 6.721073ms) Jul 15 00:36:38.592: INFO: (11) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname2/proxy/: tls qux (200; 6.7455ms) Jul 15 00:36:38.593: INFO: (11) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname2/proxy/: bar (200; 7.8687ms) Jul 15 00:36:38.593: INFO: (11) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname1/proxy/: tls baz (200; 7.901027ms) Jul 15 00:36:38.597: INFO: (12) 
/api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:462/proxy/: tls qux (200; 3.418524ms) Jul 15 00:36:38.597: INFO: (12) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn/proxy/: test (200; 3.422498ms) Jul 15 00:36:38.597: INFO: (12) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:1080/proxy/: test<... (200; 3.560879ms) Jul 15 00:36:38.597: INFO: (12) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 3.622805ms) Jul 15 00:36:38.597: INFO: (12) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:460/proxy/: tls baz (200; 3.638388ms) Jul 15 00:36:38.597: INFO: (12) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:1080/proxy/: ... (200; 3.730172ms) Jul 15 00:36:38.597: INFO: (12) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:443/proxy/: test (200; 3.683507ms) Jul 15 00:36:38.602: INFO: (13) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 3.778281ms) Jul 15 00:36:38.602: INFO: (13) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 3.783302ms) Jul 15 00:36:38.602: INFO: (13) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:1080/proxy/: test<... (200; 3.80277ms) Jul 15 00:36:38.602: INFO: (13) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:462/proxy/: tls qux (200; 3.774136ms) Jul 15 00:36:38.602: INFO: (13) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 3.783608ms) Jul 15 00:36:38.602: INFO: (13) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:443/proxy/: ... 
(200; 3.909438ms) Jul 15 00:36:38.602: INFO: (13) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:460/proxy/: tls baz (200; 3.965056ms) Jul 15 00:36:38.602: INFO: (13) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 4.039605ms) Jul 15 00:36:38.603: INFO: (13) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname2/proxy/: bar (200; 4.961189ms) Jul 15 00:36:38.603: INFO: (13) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname2/proxy/: tls qux (200; 5.031606ms) Jul 15 00:36:38.603: INFO: (13) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname1/proxy/: foo (200; 5.095197ms) Jul 15 00:36:38.603: INFO: (13) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname2/proxy/: bar (200; 5.132346ms) Jul 15 00:36:38.603: INFO: (13) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname1/proxy/: tls baz (200; 5.135301ms) Jul 15 00:36:38.603: INFO: (13) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname1/proxy/: foo (200; 5.131101ms) Jul 15 00:36:38.606: INFO: (14) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:1080/proxy/: test<... (200; 2.301238ms) Jul 15 00:36:38.606: INFO: (14) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:1080/proxy/: ... 
(200; 2.348734ms) Jul 15 00:36:38.606: INFO: (14) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 2.396088ms) Jul 15 00:36:38.606: INFO: (14) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:462/proxy/: tls qux (200; 2.403997ms) Jul 15 00:36:38.607: INFO: (14) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname1/proxy/: foo (200; 3.242454ms) Jul 15 00:36:38.607: INFO: (14) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname1/proxy/: foo (200; 3.545759ms) Jul 15 00:36:38.607: INFO: (14) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 3.834445ms) Jul 15 00:36:38.607: INFO: (14) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn/proxy/: test (200; 3.851166ms) Jul 15 00:36:38.607: INFO: (14) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname2/proxy/: tls qux (200; 4.024089ms) Jul 15 00:36:38.607: INFO: (14) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 3.997951ms) Jul 15 00:36:38.608: INFO: (14) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 4.063906ms) Jul 15 00:36:38.608: INFO: (14) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:460/proxy/: tls baz (200; 4.020548ms) Jul 15 00:36:38.608: INFO: (14) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname1/proxy/: tls baz (200; 4.122503ms) Jul 15 00:36:38.608: INFO: (14) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname2/proxy/: bar (200; 4.226271ms) Jul 15 00:36:38.608: INFO: (14) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname2/proxy/: bar (200; 4.358755ms) Jul 15 00:36:38.608: INFO: (14) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:443/proxy/: ... 
(200; 1.778766ms) Jul 15 00:36:38.611: INFO: (15) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn/proxy/: test (200; 2.88763ms) Jul 15 00:36:38.611: INFO: (15) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 2.838171ms) Jul 15 00:36:38.612: INFO: (15) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 4.00343ms) Jul 15 00:36:38.612: INFO: (15) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 4.167118ms) Jul 15 00:36:38.612: INFO: (15) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname1/proxy/: tls baz (200; 4.276022ms) Jul 15 00:36:38.613: INFO: (15) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 4.863916ms) Jul 15 00:36:38.613: INFO: (15) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:443/proxy/: test<... (200; 4.865666ms) Jul 15 00:36:38.613: INFO: (15) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:462/proxy/: tls qux (200; 4.871558ms) Jul 15 00:36:38.613: INFO: (15) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:460/proxy/: tls baz (200; 4.917516ms) Jul 15 00:36:38.613: INFO: (15) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname1/proxy/: foo (200; 5.536929ms) Jul 15 00:36:38.613: INFO: (15) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname1/proxy/: foo (200; 5.49214ms) Jul 15 00:36:38.613: INFO: (15) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname2/proxy/: bar (200; 5.518578ms) Jul 15 00:36:38.613: INFO: (15) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname2/proxy/: tls qux (200; 5.589092ms) Jul 15 00:36:38.617: INFO: (16) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 3.010298ms) Jul 15 00:36:38.617: INFO: (16) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:462/proxy/: tls 
qux (200; 2.952512ms) Jul 15 00:36:38.617: INFO: (16) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 2.968054ms) Jul 15 00:36:38.617: INFO: (16) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 3.474124ms) Jul 15 00:36:38.617: INFO: (16) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:460/proxy/: tls baz (200; 3.499794ms) Jul 15 00:36:38.617: INFO: (16) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname1/proxy/: tls baz (200; 3.565152ms) Jul 15 00:36:38.617: INFO: (16) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname2/proxy/: bar (200; 3.766639ms) Jul 15 00:36:38.617: INFO: (16) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname1/proxy/: foo (200; 3.88971ms) Jul 15 00:36:38.618: INFO: (16) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname2/proxy/: tls qux (200; 3.964317ms) Jul 15 00:36:38.617: INFO: (16) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname2/proxy/: bar (200; 3.923657ms) Jul 15 00:36:38.618: INFO: (16) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:1080/proxy/: ... (200; 4.005295ms) Jul 15 00:36:38.617: INFO: (16) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname1/proxy/: foo (200; 3.978008ms) Jul 15 00:36:38.618: INFO: (16) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:443/proxy/: test (200; 4.133021ms) Jul 15 00:36:38.618: INFO: (16) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:1080/proxy/: test<... 
(200; 4.082338ms) Jul 15 00:36:38.618: INFO: (16) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 4.318165ms) Jul 15 00:36:38.623: INFO: (17) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 4.578826ms) Jul 15 00:36:38.623: INFO: (17) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 4.563279ms) Jul 15 00:36:38.623: INFO: (17) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:1080/proxy/: ... (200; 5.494245ms) Jul 15 00:36:38.623: INFO: (17) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname2/proxy/: bar (200; 5.452738ms) Jul 15 00:36:38.623: INFO: (17) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 5.499922ms) Jul 15 00:36:38.624: INFO: (17) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname1/proxy/: foo (200; 5.532587ms) Jul 15 00:36:38.624: INFO: (17) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:443/proxy/: test (200; 6.049259ms) Jul 15 00:36:38.624: INFO: (17) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 6.053089ms) Jul 15 00:36:38.624: INFO: (17) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:1080/proxy/: test<... (200; 5.989346ms) Jul 15 00:36:38.624: INFO: (17) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname1/proxy/: foo (200; 6.005207ms) Jul 15 00:36:38.624: INFO: (17) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:460/proxy/: tls baz (200; 6.16008ms) Jul 15 00:36:38.628: INFO: (18) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:443/proxy/: ... 
(200; 5.144375ms) Jul 15 00:36:38.629: INFO: (18) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:460/proxy/: tls baz (200; 5.047295ms) Jul 15 00:36:38.630: INFO: (18) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 5.343451ms) Jul 15 00:36:38.630: INFO: (18) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn/proxy/: test (200; 5.400721ms) Jul 15 00:36:38.630: INFO: (18) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 5.346061ms) Jul 15 00:36:38.630: INFO: (18) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:1080/proxy/: test<... (200; 5.400151ms) Jul 15 00:36:38.630: INFO: (18) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:462/proxy/: tls qux (200; 5.584858ms) Jul 15 00:36:38.630: INFO: (18) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 5.80023ms) Jul 15 00:36:38.632: INFO: (19) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:1080/proxy/: test<... (200; 2.429588ms) Jul 15 00:36:38.633: INFO: (19) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn/proxy/: test (200; 2.531265ms) Jul 15 00:36:38.633: INFO: (19) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:460/proxy/: tls baz (200; 2.886997ms) Jul 15 00:36:38.635: INFO: (19) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname1/proxy/: tls baz (200; 4.54477ms) Jul 15 00:36:38.635: INFO: (19) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:462/proxy/: tls qux (200; 4.536835ms) Jul 15 00:36:38.635: INFO: (19) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:1080/proxy/: ... 
(200; 4.568854ms) Jul 15 00:36:38.635: INFO: (19) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname2/proxy/: bar (200; 4.607006ms) Jul 15 00:36:38.635: INFO: (19) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 4.564412ms) Jul 15 00:36:38.635: INFO: (19) /api/v1/namespaces/proxy-2326/services/https:proxy-service-2trh7:tlsportname2/proxy/: tls qux (200; 4.639444ms) Jul 15 00:36:38.635: INFO: (19) /api/v1/namespaces/proxy-2326/services/proxy-service-2trh7:portname1/proxy/: foo (200; 4.653597ms) Jul 15 00:36:38.635: INFO: (19) /api/v1/namespaces/proxy-2326/pods/proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 4.894508ms) Jul 15 00:36:38.635: INFO: (19) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname1/proxy/: foo (200; 4.882601ms) Jul 15 00:36:38.635: INFO: (19) /api/v1/namespaces/proxy-2326/services/http:proxy-service-2trh7:portname2/proxy/: bar (200; 4.890925ms) Jul 15 00:36:38.635: INFO: (19) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:160/proxy/: foo (200; 4.890495ms) Jul 15 00:36:38.635: INFO: (19) /api/v1/namespaces/proxy-2326/pods/http:proxy-service-2trh7-5kzjn:162/proxy/: bar (200; 5.048646ms) Jul 15 00:36:38.635: INFO: (19) /api/v1/namespaces/proxy-2326/pods/https:proxy-service-2trh7-5kzjn:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-4883/configmap-test-2fc8585c-2d73-4a61-96f1-d06898d2371e STEP: Creating a pod to test consume configMaps Jul 15 00:36:41.636: INFO: Waiting up to 5m0s for pod "pod-configmaps-02b1b797-5359-421c-9ec6-52596f38e3e5" in namespace "configmap-4883" to be "Succeeded or Failed" Jul 15 00:36:41.640: 
INFO: Pod "pod-configmaps-02b1b797-5359-421c-9ec6-52596f38e3e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.24856ms Jul 15 00:36:43.763: INFO: Pod "pod-configmaps-02b1b797-5359-421c-9ec6-52596f38e3e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12647177s Jul 15 00:36:45.766: INFO: Pod "pod-configmaps-02b1b797-5359-421c-9ec6-52596f38e3e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129900845s Jul 15 00:36:47.770: INFO: Pod "pod-configmaps-02b1b797-5359-421c-9ec6-52596f38e3e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.133674308s STEP: Saw pod success Jul 15 00:36:47.770: INFO: Pod "pod-configmaps-02b1b797-5359-421c-9ec6-52596f38e3e5" satisfied condition "Succeeded or Failed" Jul 15 00:36:47.772: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-02b1b797-5359-421c-9ec6-52596f38e3e5 container env-test: STEP: delete the pod Jul 15 00:36:47.791: INFO: Waiting for pod pod-configmaps-02b1b797-5359-421c-9ec6-52596f38e3e5 to disappear Jul 15 00:36:47.796: INFO: Pod pod-configmaps-02b1b797-5359-421c-9ec6-52596f38e3e5 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:36:47.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4883" for this suite. 
• [SLOW TEST:6.302 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":294,"completed":219,"skipped":3431,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:36:47.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Jul 15 00:36:47.916: INFO: Waiting up to 5m0s for pod "var-expansion-bf7a2f7e-3338-4c24-b4b6-90a53dc009cf" in namespace "var-expansion-6336" to be "Succeeded or Failed" Jul 15 00:36:47.935: INFO: Pod "var-expansion-bf7a2f7e-3338-4c24-b4b6-90a53dc009cf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.764037ms Jul 15 00:36:50.012: INFO: Pod "var-expansion-bf7a2f7e-3338-4c24-b4b6-90a53dc009cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095162077s Jul 15 00:36:52.038: INFO: Pod "var-expansion-bf7a2f7e-3338-4c24-b4b6-90a53dc009cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.121248112s STEP: Saw pod success Jul 15 00:36:52.038: INFO: Pod "var-expansion-bf7a2f7e-3338-4c24-b4b6-90a53dc009cf" satisfied condition "Succeeded or Failed" Jul 15 00:36:52.040: INFO: Trying to get logs from node latest-worker pod var-expansion-bf7a2f7e-3338-4c24-b4b6-90a53dc009cf container dapi-container: STEP: delete the pod Jul 15 00:36:52.107: INFO: Waiting for pod var-expansion-bf7a2f7e-3338-4c24-b4b6-90a53dc009cf to disappear Jul 15 00:36:52.113: INFO: Pod var-expansion-bf7a2f7e-3338-4c24-b4b6-90a53dc009cf no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:36:52.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6336" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":294,"completed":220,"skipped":3439,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:36:52.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 15 00:36:52.287: INFO: Waiting up to 5m0s for pod "pod-6c489ff6-ef2c-492b-8a98-59b042cd0d43" in namespace "emptydir-145" to be "Succeeded or Failed" Jul 15 00:36:52.325: INFO: Pod "pod-6c489ff6-ef2c-492b-8a98-59b042cd0d43": Phase="Pending", Reason="", readiness=false. Elapsed: 37.974726ms Jul 15 00:36:54.653: INFO: Pod "pod-6c489ff6-ef2c-492b-8a98-59b042cd0d43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.36591673s Jul 15 00:36:56.679: INFO: Pod "pod-6c489ff6-ef2c-492b-8a98-59b042cd0d43": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.391938964s STEP: Saw pod success Jul 15 00:36:56.679: INFO: Pod "pod-6c489ff6-ef2c-492b-8a98-59b042cd0d43" satisfied condition "Succeeded or Failed" Jul 15 00:36:56.682: INFO: Trying to get logs from node latest-worker pod pod-6c489ff6-ef2c-492b-8a98-59b042cd0d43 container test-container: STEP: delete the pod Jul 15 00:36:56.922: INFO: Waiting for pod pod-6c489ff6-ef2c-492b-8a98-59b042cd0d43 to disappear Jul 15 00:36:56.927: INFO: Pod pod-6c489ff6-ef2c-492b-8a98-59b042cd0d43 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:36:56.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-145" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":221,"skipped":3459,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:36:56.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-1673f62c-6c1e-4270-b504-4f004dae1a8d in namespace container-probe-6529 Jul 15 00:37:01.054: INFO: Started pod liveness-1673f62c-6c1e-4270-b504-4f004dae1a8d in namespace container-probe-6529 STEP: checking the pod's current state and verifying that restartCount is present Jul 15 00:37:01.057: INFO: Initial restart count of pod liveness-1673f62c-6c1e-4270-b504-4f004dae1a8d is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:41:01.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6529" for this suite. 
• [SLOW TEST:244.929 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":294,"completed":222,"skipped":3470,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:41:01.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-4f169f5b-826f-423b-b876-e500234b74e8 STEP: Creating a pod to test consume configMaps Jul 15 00:41:02.283: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-c9225f92-18fb-4776-ad65-7d458e57a63a" in namespace "projected-6621" to be "Succeeded or Failed" Jul 15 00:41:02.286: INFO: Pod "pod-projected-configmaps-c9225f92-18fb-4776-ad65-7d458e57a63a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.05798ms Jul 15 00:41:04.291: INFO: Pod "pod-projected-configmaps-c9225f92-18fb-4776-ad65-7d458e57a63a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007389602s Jul 15 00:41:06.295: INFO: Pod "pod-projected-configmaps-c9225f92-18fb-4776-ad65-7d458e57a63a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011328662s STEP: Saw pod success Jul 15 00:41:06.295: INFO: Pod "pod-projected-configmaps-c9225f92-18fb-4776-ad65-7d458e57a63a" satisfied condition "Succeeded or Failed" Jul 15 00:41:06.298: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-c9225f92-18fb-4776-ad65-7d458e57a63a container projected-configmap-volume-test: STEP: delete the pod Jul 15 00:41:06.373: INFO: Waiting for pod pod-projected-configmaps-c9225f92-18fb-4776-ad65-7d458e57a63a to disappear Jul 15 00:41:06.395: INFO: Pod pod-projected-configmaps-c9225f92-18fb-4776-ad65-7d458e57a63a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:41:06.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6621" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":223,"skipped":3499,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:41:06.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:41:22.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-244" for this suite. • [SLOW TEST:16.293 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":294,"completed":224,"skipped":3512,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} S ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:41:22.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Jul 15 00:41:26.840: INFO: Pod pod-hostip-7a498224-6454-42f3-bcb9-4156b11b40d3 has hostIP: 172.18.0.14 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:41:26.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1445" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":294,"completed":225,"skipped":3513,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:41:26.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4970 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-4970 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4970 Jul 15 00:41:26.982: INFO: Found 0 stateful pods, waiting for 1 Jul 15 00:41:36.986: 
INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jul 15 00:41:36.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4970 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 15 00:41:37.268: INFO: stderr: "I0715 00:41:37.134927 3010 log.go:181] (0xc000f3b080) (0xc00077bae0) Create stream\nI0715 00:41:37.135011 3010 log.go:181] (0xc000f3b080) (0xc00077bae0) Stream added, broadcasting: 1\nI0715 00:41:37.143345 3010 log.go:181] (0xc000f3b080) Reply frame received for 1\nI0715 00:41:37.143380 3010 log.go:181] (0xc000f3b080) (0xc00036ab40) Create stream\nI0715 00:41:37.143389 3010 log.go:181] (0xc000f3b080) (0xc00036ab40) Stream added, broadcasting: 3\nI0715 00:41:37.144174 3010 log.go:181] (0xc000f3b080) Reply frame received for 3\nI0715 00:41:37.144215 3010 log.go:181] (0xc000f3b080) (0xc00019d360) Create stream\nI0715 00:41:37.144237 3010 log.go:181] (0xc000f3b080) (0xc00019d360) Stream added, broadcasting: 5\nI0715 00:41:37.145209 3010 log.go:181] (0xc000f3b080) Reply frame received for 5\nI0715 00:41:37.229302 3010 log.go:181] (0xc000f3b080) Data frame received for 5\nI0715 00:41:37.229334 3010 log.go:181] (0xc00019d360) (5) Data frame handling\nI0715 00:41:37.229355 3010 log.go:181] (0xc00019d360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0715 00:41:37.261339 3010 log.go:181] (0xc000f3b080) Data frame received for 3\nI0715 00:41:37.261365 3010 log.go:181] (0xc00036ab40) (3) Data frame handling\nI0715 00:41:37.261382 3010 log.go:181] (0xc00036ab40) (3) Data frame sent\nI0715 00:41:37.261492 3010 log.go:181] (0xc000f3b080) Data frame received for 5\nI0715 00:41:37.261510 3010 log.go:181] (0xc00019d360) (5) Data frame handling\nI0715 00:41:37.262197 3010 log.go:181] (0xc000f3b080) Data frame 
received for 3\nI0715 00:41:37.262213 3010 log.go:181] (0xc00036ab40) (3) Data frame handling\nI0715 00:41:37.263766 3010 log.go:181] (0xc000f3b080) Data frame received for 1\nI0715 00:41:37.263817 3010 log.go:181] (0xc00077bae0) (1) Data frame handling\nI0715 00:41:37.263837 3010 log.go:181] (0xc00077bae0) (1) Data frame sent\nI0715 00:41:37.263850 3010 log.go:181] (0xc000f3b080) (0xc00077bae0) Stream removed, broadcasting: 1\nI0715 00:41:37.263876 3010 log.go:181] (0xc000f3b080) Go away received\nI0715 00:41:37.264346 3010 log.go:181] (0xc000f3b080) (0xc00077bae0) Stream removed, broadcasting: 1\nI0715 00:41:37.264366 3010 log.go:181] (0xc000f3b080) (0xc00036ab40) Stream removed, broadcasting: 3\nI0715 00:41:37.264376 3010 log.go:181] (0xc000f3b080) (0xc00019d360) Stream removed, broadcasting: 5\n" Jul 15 00:41:37.268: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 15 00:41:37.268: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 15 00:41:37.271: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jul 15 00:41:47.275: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 15 00:41:47.275: INFO: Waiting for statefulset status.replicas updated to 0 Jul 15 00:41:47.294: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999232s Jul 15 00:41:48.299: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.992651995s Jul 15 00:41:49.303: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.987972596s Jul 15 00:41:50.309: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.98323556s Jul 15 00:41:51.313: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.978093476s Jul 15 00:41:52.317: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.973513388s Jul 15 00:41:53.320: INFO: 
Verifying statefulset ss doesn't scale past 1 for another 3.969176395s Jul 15 00:41:54.324: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.966720191s Jul 15 00:41:55.329: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.962102629s Jul 15 00:41:56.342: INFO: Verifying statefulset ss doesn't scale past 1 for another 957.683509ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4970 Jul 15 00:41:57.346: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4970 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:41:57.567: INFO: stderr: "I0715 00:41:57.477250 3028 log.go:181] (0xc000c9c160) (0xc00069c280) Create stream\nI0715 00:41:57.477302 3028 log.go:181] (0xc000c9c160) (0xc00069c280) Stream added, broadcasting: 1\nI0715 00:41:57.479547 3028 log.go:181] (0xc000c9c160) Reply frame received for 1\nI0715 00:41:57.479582 3028 log.go:181] (0xc000c9c160) (0xc00061cfa0) Create stream\nI0715 00:41:57.479592 3028 log.go:181] (0xc000c9c160) (0xc00061cfa0) Stream added, broadcasting: 3\nI0715 00:41:57.480532 3028 log.go:181] (0xc000c9c160) Reply frame received for 3\nI0715 00:41:57.480602 3028 log.go:181] (0xc000c9c160) (0xc00069d720) Create stream\nI0715 00:41:57.480633 3028 log.go:181] (0xc000c9c160) (0xc00069d720) Stream added, broadcasting: 5\nI0715 00:41:57.481783 3028 log.go:181] (0xc000c9c160) Reply frame received for 5\nI0715 00:41:57.562010 3028 log.go:181] (0xc000c9c160) Data frame received for 5\nI0715 00:41:57.562040 3028 log.go:181] (0xc00069d720) (5) Data frame handling\nI0715 00:41:57.562047 3028 log.go:181] (0xc00069d720) (5) Data frame sent\nI0715 00:41:57.562053 3028 log.go:181] (0xc000c9c160) Data frame received for 5\nI0715 00:41:57.562058 3028 log.go:181] (0xc00069d720) (5) Data frame handling\n+ mv -v /tmp/index.html 
/usr/local/apache2/htdocs/\nI0715 00:41:57.562066 3028 log.go:181] (0xc000c9c160) Data frame received for 3\nI0715 00:41:57.562071 3028 log.go:181] (0xc00061cfa0) (3) Data frame handling\nI0715 00:41:57.562077 3028 log.go:181] (0xc00061cfa0) (3) Data frame sent\nI0715 00:41:57.562081 3028 log.go:181] (0xc000c9c160) Data frame received for 3\nI0715 00:41:57.562085 3028 log.go:181] (0xc00061cfa0) (3) Data frame handling\nI0715 00:41:57.563094 3028 log.go:181] (0xc000c9c160) Data frame received for 1\nI0715 00:41:57.563114 3028 log.go:181] (0xc00069c280) (1) Data frame handling\nI0715 00:41:57.563123 3028 log.go:181] (0xc00069c280) (1) Data frame sent\nI0715 00:41:57.563156 3028 log.go:181] (0xc000c9c160) (0xc00069c280) Stream removed, broadcasting: 1\nI0715 00:41:57.563175 3028 log.go:181] (0xc000c9c160) Go away received\nI0715 00:41:57.563508 3028 log.go:181] (0xc000c9c160) (0xc00069c280) Stream removed, broadcasting: 1\nI0715 00:41:57.563529 3028 log.go:181] (0xc000c9c160) (0xc00061cfa0) Stream removed, broadcasting: 3\nI0715 00:41:57.563534 3028 log.go:181] (0xc000c9c160) (0xc00069d720) Stream removed, broadcasting: 5\n" Jul 15 00:41:57.567: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 15 00:41:57.567: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 15 00:41:57.571: INFO: Found 1 stateful pods, waiting for 3 Jul 15 00:42:07.577: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 15 00:42:07.577: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 15 00:42:07.577: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jul 15 00:42:07.584: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4970 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 15 00:42:07.799: INFO: stderr: "I0715 00:42:07.730847 3047 log.go:181] (0xc00003a2c0) (0xc000377f40) Create stream\nI0715 00:42:07.730903 3047 log.go:181] (0xc00003a2c0) (0xc000377f40) Stream added, broadcasting: 1\nI0715 00:42:07.736303 3047 log.go:181] (0xc00003a2c0) Reply frame received for 1\nI0715 00:42:07.736383 3047 log.go:181] (0xc00003a2c0) (0xc0009c0140) Create stream\nI0715 00:42:07.736413 3047 log.go:181] (0xc00003a2c0) (0xc0009c0140) Stream added, broadcasting: 3\nI0715 00:42:07.737511 3047 log.go:181] (0xc00003a2c0) Reply frame received for 3\nI0715 00:42:07.737537 3047 log.go:181] (0xc00003a2c0) (0xc00068c500) Create stream\nI0715 00:42:07.737545 3047 log.go:181] (0xc00003a2c0) (0xc00068c500) Stream added, broadcasting: 5\nI0715 00:42:07.738468 3047 log.go:181] (0xc00003a2c0) Reply frame received for 5\nI0715 00:42:07.790907 3047 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0715 00:42:07.791057 3047 log.go:181] (0xc0009c0140) (3) Data frame handling\nI0715 00:42:07.791137 3047 log.go:181] (0xc0009c0140) (3) Data frame sent\nI0715 00:42:07.791360 3047 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0715 00:42:07.791434 3047 log.go:181] (0xc00068c500) (5) Data frame handling\nI0715 00:42:07.791497 3047 log.go:181] (0xc00068c500) (5) Data frame sent\nI0715 00:42:07.791567 3047 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0715 00:42:07.791635 3047 log.go:181] (0xc00068c500) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0715 00:42:07.793312 3047 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0715 00:42:07.793334 3047 log.go:181] (0xc0009c0140) (3) Data frame handling\nI0715 00:42:07.794892 3047 log.go:181] (0xc00003a2c0) Data frame received for 1\nI0715 00:42:07.794906 3047 log.go:181] (0xc000377f40) (1) 
Data frame handling\nI0715 00:42:07.794916 3047 log.go:181] (0xc000377f40) (1) Data frame sent\nI0715 00:42:07.794923 3047 log.go:181] (0xc00003a2c0) (0xc000377f40) Stream removed, broadcasting: 1\nI0715 00:42:07.795030 3047 log.go:181] (0xc00003a2c0) Go away received\nI0715 00:42:07.795171 3047 log.go:181] (0xc00003a2c0) (0xc000377f40) Stream removed, broadcasting: 1\nI0715 00:42:07.795185 3047 log.go:181] (0xc00003a2c0) (0xc0009c0140) Stream removed, broadcasting: 3\nI0715 00:42:07.795191 3047 log.go:181] (0xc00003a2c0) (0xc00068c500) Stream removed, broadcasting: 5\n" Jul 15 00:42:07.799: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 15 00:42:07.799: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 15 00:42:07.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4970 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 15 00:42:08.053: INFO: stderr: "I0715 00:42:07.951379 3065 log.go:181] (0xc00057af20) (0xc000c49900) Create stream\nI0715 00:42:07.951429 3065 log.go:181] (0xc00057af20) (0xc000c49900) Stream added, broadcasting: 1\nI0715 00:42:07.956095 3065 log.go:181] (0xc00057af20) Reply frame received for 1\nI0715 00:42:07.956145 3065 log.go:181] (0xc00057af20) (0xc000c350e0) Create stream\nI0715 00:42:07.956162 3065 log.go:181] (0xc00057af20) (0xc000c350e0) Stream added, broadcasting: 3\nI0715 00:42:07.957121 3065 log.go:181] (0xc00057af20) Reply frame received for 3\nI0715 00:42:07.957153 3065 log.go:181] (0xc00057af20) (0xc0007b0aa0) Create stream\nI0715 00:42:07.957163 3065 log.go:181] (0xc00057af20) (0xc0007b0aa0) Stream added, broadcasting: 5\nI0715 00:42:07.958106 3065 log.go:181] (0xc00057af20) Reply frame received for 5\nI0715 00:42:08.011234 3065 log.go:181] (0xc00057af20) Data frame received for 
5\nI0715 00:42:08.011261 3065 log.go:181] (0xc0007b0aa0) (5) Data frame handling\nI0715 00:42:08.011282 3065 log.go:181] (0xc0007b0aa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0715 00:42:08.045238 3065 log.go:181] (0xc00057af20) Data frame received for 3\nI0715 00:42:08.045276 3065 log.go:181] (0xc000c350e0) (3) Data frame handling\nI0715 00:42:08.045312 3065 log.go:181] (0xc000c350e0) (3) Data frame sent\nI0715 00:42:08.045332 3065 log.go:181] (0xc00057af20) Data frame received for 3\nI0715 00:42:08.045350 3065 log.go:181] (0xc000c350e0) (3) Data frame handling\nI0715 00:42:08.045543 3065 log.go:181] (0xc00057af20) Data frame received for 5\nI0715 00:42:08.045631 3065 log.go:181] (0xc0007b0aa0) (5) Data frame handling\nI0715 00:42:08.047408 3065 log.go:181] (0xc00057af20) Data frame received for 1\nI0715 00:42:08.047423 3065 log.go:181] (0xc000c49900) (1) Data frame handling\nI0715 00:42:08.047433 3065 log.go:181] (0xc000c49900) (1) Data frame sent\nI0715 00:42:08.047751 3065 log.go:181] (0xc00057af20) (0xc000c49900) Stream removed, broadcasting: 1\nI0715 00:42:08.047953 3065 log.go:181] (0xc00057af20) Go away received\nI0715 00:42:08.048216 3065 log.go:181] (0xc00057af20) (0xc000c49900) Stream removed, broadcasting: 1\nI0715 00:42:08.048232 3065 log.go:181] (0xc00057af20) (0xc000c350e0) Stream removed, broadcasting: 3\nI0715 00:42:08.048242 3065 log.go:181] (0xc00057af20) (0xc0007b0aa0) Stream removed, broadcasting: 5\n" Jul 15 00:42:08.053: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 15 00:42:08.053: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 15 00:42:08.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4970 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 15 
00:42:08.299: INFO: stderr: "I0715 00:42:08.187382 3083 log.go:181] (0xc001032000) (0xc00019ec80) Create stream\nI0715 00:42:08.187434 3083 log.go:181] (0xc001032000) (0xc00019ec80) Stream added, broadcasting: 1\nI0715 00:42:08.189425 3083 log.go:181] (0xc001032000) Reply frame received for 1\nI0715 00:42:08.189470 3083 log.go:181] (0xc001032000) (0xc0003c3860) Create stream\nI0715 00:42:08.189484 3083 log.go:181] (0xc001032000) (0xc0003c3860) Stream added, broadcasting: 3\nI0715 00:42:08.190407 3083 log.go:181] (0xc001032000) Reply frame received for 3\nI0715 00:42:08.190457 3083 log.go:181] (0xc001032000) (0xc0006d7360) Create stream\nI0715 00:42:08.190472 3083 log.go:181] (0xc001032000) (0xc0006d7360) Stream added, broadcasting: 5\nI0715 00:42:08.191304 3083 log.go:181] (0xc001032000) Reply frame received for 5\nI0715 00:42:08.264091 3083 log.go:181] (0xc001032000) Data frame received for 5\nI0715 00:42:08.264118 3083 log.go:181] (0xc0006d7360) (5) Data frame handling\nI0715 00:42:08.264139 3083 log.go:181] (0xc0006d7360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0715 00:42:08.290552 3083 log.go:181] (0xc001032000) Data frame received for 3\nI0715 00:42:08.290581 3083 log.go:181] (0xc0003c3860) (3) Data frame handling\nI0715 00:42:08.290598 3083 log.go:181] (0xc0003c3860) (3) Data frame sent\nI0715 00:42:08.290972 3083 log.go:181] (0xc001032000) Data frame received for 3\nI0715 00:42:08.291016 3083 log.go:181] (0xc0003c3860) (3) Data frame handling\nI0715 00:42:08.291058 3083 log.go:181] (0xc001032000) Data frame received for 5\nI0715 00:42:08.291117 3083 log.go:181] (0xc0006d7360) (5) Data frame handling\nI0715 00:42:08.292918 3083 log.go:181] (0xc001032000) Data frame received for 1\nI0715 00:42:08.292951 3083 log.go:181] (0xc00019ec80) (1) Data frame handling\nI0715 00:42:08.292979 3083 log.go:181] (0xc00019ec80) (1) Data frame sent\nI0715 00:42:08.293016 3083 log.go:181] (0xc001032000) (0xc00019ec80) Stream removed, 
broadcasting: 1\nI0715 00:42:08.293125 3083 log.go:181] (0xc001032000) Go away received\nI0715 00:42:08.293564 3083 log.go:181] (0xc001032000) (0xc00019ec80) Stream removed, broadcasting: 1\nI0715 00:42:08.293591 3083 log.go:181] (0xc001032000) (0xc0003c3860) Stream removed, broadcasting: 3\nI0715 00:42:08.293603 3083 log.go:181] (0xc001032000) (0xc0006d7360) Stream removed, broadcasting: 5\n" Jul 15 00:42:08.299: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 15 00:42:08.299: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 15 00:42:08.299: INFO: Waiting for statefulset status.replicas updated to 0 Jul 15 00:42:08.303: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jul 15 00:42:18.313: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 15 00:42:18.313: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jul 15 00:42:18.313: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jul 15 00:42:18.384: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999551s Jul 15 00:42:19.388: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.934927266s Jul 15 00:42:20.392: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.931383341s Jul 15 00:42:21.421: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.92711636s Jul 15 00:42:22.425: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.898691412s Jul 15 00:42:23.430: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.894179985s Jul 15 00:42:24.434: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.889800317s Jul 15 00:42:25.439: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.885302429s Jul 15 00:42:26.443: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 1.880190549s Jul 15 00:42:27.448: INFO: Verifying statefulset ss doesn't scale past 3 for another 876.290131ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-4970 Jul 15 00:42:28.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4970 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:42:28.681: INFO: stderr: "I0715 00:42:28.603881 3101 log.go:181] (0xc00016f080) (0xc0007b5860) Create stream\nI0715 00:42:28.603935 3101 log.go:181] (0xc00016f080) (0xc0007b5860) Stream added, broadcasting: 1\nI0715 00:42:28.609907 3101 log.go:181] (0xc00016f080) Reply frame received for 1\nI0715 00:42:28.609953 3101 log.go:181] (0xc00016f080) (0xc000691220) Create stream\nI0715 00:42:28.609969 3101 log.go:181] (0xc00016f080) (0xc000691220) Stream added, broadcasting: 3\nI0715 00:42:28.610864 3101 log.go:181] (0xc00016f080) Reply frame received for 3\nI0715 00:42:28.610890 3101 log.go:181] (0xc00016f080) (0xc000522780) Create stream\nI0715 00:42:28.610900 3101 log.go:181] (0xc00016f080) (0xc000522780) Stream added, broadcasting: 5\nI0715 00:42:28.611805 3101 log.go:181] (0xc00016f080) Reply frame received for 5\nI0715 00:42:28.673140 3101 log.go:181] (0xc00016f080) Data frame received for 3\nI0715 00:42:28.673201 3101 log.go:181] (0xc000691220) (3) Data frame handling\nI0715 00:42:28.673225 3101 log.go:181] (0xc000691220) (3) Data frame sent\nI0715 00:42:28.673259 3101 log.go:181] (0xc00016f080) Data frame received for 5\nI0715 00:42:28.673290 3101 log.go:181] (0xc000522780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0715 00:42:28.673310 3101 log.go:181] (0xc00016f080) Data frame received for 3\nI0715 00:42:28.673331 3101 log.go:181] (0xc000691220) (3) Data frame handling\nI0715 00:42:28.673355 3101 log.go:181] 
(0xc000522780) (5) Data frame sent\nI0715 00:42:28.673394 3101 log.go:181] (0xc00016f080) Data frame received for 5\nI0715 00:42:28.673420 3101 log.go:181] (0xc000522780) (5) Data frame handling\nI0715 00:42:28.675084 3101 log.go:181] (0xc00016f080) Data frame received for 1\nI0715 00:42:28.675096 3101 log.go:181] (0xc0007b5860) (1) Data frame handling\nI0715 00:42:28.675102 3101 log.go:181] (0xc0007b5860) (1) Data frame sent\nI0715 00:42:28.675231 3101 log.go:181] (0xc00016f080) (0xc0007b5860) Stream removed, broadcasting: 1\nI0715 00:42:28.675412 3101 log.go:181] (0xc00016f080) Go away received\nI0715 00:42:28.675716 3101 log.go:181] (0xc00016f080) (0xc0007b5860) Stream removed, broadcasting: 1\nI0715 00:42:28.675735 3101 log.go:181] (0xc00016f080) (0xc000691220) Stream removed, broadcasting: 3\nI0715 00:42:28.675746 3101 log.go:181] (0xc00016f080) (0xc000522780) Stream removed, broadcasting: 5\n" Jul 15 00:42:28.681: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 15 00:42:28.681: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 15 00:42:28.681: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4970 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:42:28.895: INFO: stderr: "I0715 00:42:28.828043 3119 log.go:181] (0xc000cccbb0) (0xc000b09220) Create stream\nI0715 00:42:28.828109 3119 log.go:181] (0xc000cccbb0) (0xc000b09220) Stream added, broadcasting: 1\nI0715 00:42:28.830069 3119 log.go:181] (0xc000cccbb0) Reply frame received for 1\nI0715 00:42:28.830105 3119 log.go:181] (0xc000cccbb0) (0xc000b097c0) Create stream\nI0715 00:42:28.830116 3119 log.go:181] (0xc000cccbb0) (0xc000b097c0) Stream added, broadcasting: 3\nI0715 00:42:28.831144 3119 log.go:181] (0xc000cccbb0) Reply frame received for 3\nI0715 
00:42:28.831194 3119 log.go:181] (0xc000cccbb0) (0xc000918dc0) Create stream\nI0715 00:42:28.831207 3119 log.go:181] (0xc000cccbb0) (0xc000918dc0) Stream added, broadcasting: 5\nI0715 00:42:28.832233 3119 log.go:181] (0xc000cccbb0) Reply frame received for 5\nI0715 00:42:28.889106 3119 log.go:181] (0xc000cccbb0) Data frame received for 5\nI0715 00:42:28.889151 3119 log.go:181] (0xc000cccbb0) Data frame received for 3\nI0715 00:42:28.889180 3119 log.go:181] (0xc000b097c0) (3) Data frame handling\nI0715 00:42:28.889188 3119 log.go:181] (0xc000b097c0) (3) Data frame sent\nI0715 00:42:28.889206 3119 log.go:181] (0xc000918dc0) (5) Data frame handling\nI0715 00:42:28.889213 3119 log.go:181] (0xc000918dc0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0715 00:42:28.889443 3119 log.go:181] (0xc000cccbb0) Data frame received for 5\nI0715 00:42:28.889456 3119 log.go:181] (0xc000918dc0) (5) Data frame handling\nI0715 00:42:28.889567 3119 log.go:181] (0xc000cccbb0) Data frame received for 3\nI0715 00:42:28.889590 3119 log.go:181] (0xc000b097c0) (3) Data frame handling\nI0715 00:42:28.891160 3119 log.go:181] (0xc000cccbb0) Data frame received for 1\nI0715 00:42:28.891183 3119 log.go:181] (0xc000b09220) (1) Data frame handling\nI0715 00:42:28.891200 3119 log.go:181] (0xc000b09220) (1) Data frame sent\nI0715 00:42:28.891217 3119 log.go:181] (0xc000cccbb0) (0xc000b09220) Stream removed, broadcasting: 1\nI0715 00:42:28.891315 3119 log.go:181] (0xc000cccbb0) Go away received\nI0715 00:42:28.891508 3119 log.go:181] (0xc000cccbb0) (0xc000b09220) Stream removed, broadcasting: 1\nI0715 00:42:28.891536 3119 log.go:181] (0xc000cccbb0) (0xc000b097c0) Stream removed, broadcasting: 3\nI0715 00:42:28.891547 3119 log.go:181] (0xc000cccbb0) (0xc000918dc0) Stream removed, broadcasting: 5\n" Jul 15 00:42:28.895: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 15 00:42:28.895: INFO: stdout of mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 15 00:42:28.896: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4970 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 15 00:42:29.110: INFO: stderr: "I0715 00:42:29.040240 3137 log.go:181] (0xc0009b7080) (0xc000bb9b80) Create stream\nI0715 00:42:29.040302 3137 log.go:181] (0xc0009b7080) (0xc000bb9b80) Stream added, broadcasting: 1\nI0715 00:42:29.045671 3137 log.go:181] (0xc0009b7080) Reply frame received for 1\nI0715 00:42:29.045721 3137 log.go:181] (0xc0009b7080) (0xc000760b40) Create stream\nI0715 00:42:29.045739 3137 log.go:181] (0xc0009b7080) (0xc000760b40) Stream added, broadcasting: 3\nI0715 00:42:29.046775 3137 log.go:181] (0xc0009b7080) Reply frame received for 3\nI0715 00:42:29.046827 3137 log.go:181] (0xc0009b7080) (0xc000761e00) Create stream\nI0715 00:42:29.046847 3137 log.go:181] (0xc0009b7080) (0xc000761e00) Stream added, broadcasting: 5\nI0715 00:42:29.047931 3137 log.go:181] (0xc0009b7080) Reply frame received for 5\nI0715 00:42:29.104537 3137 log.go:181] (0xc0009b7080) Data frame received for 5\nI0715 00:42:29.104568 3137 log.go:181] (0xc000761e00) (5) Data frame handling\nI0715 00:42:29.104580 3137 log.go:181] (0xc000761e00) (5) Data frame sent\nI0715 00:42:29.104588 3137 log.go:181] (0xc0009b7080) Data frame received for 5\nI0715 00:42:29.104594 3137 log.go:181] (0xc000761e00) (5) Data frame handling\nI0715 00:42:29.104604 3137 log.go:181] (0xc0009b7080) Data frame received for 3\nI0715 00:42:29.104611 3137 log.go:181] (0xc000760b40) (3) Data frame handling\nI0715 00:42:29.104619 3137 log.go:181] (0xc000760b40) (3) Data frame sent\nI0715 00:42:29.104629 3137 log.go:181] (0xc0009b7080) Data frame received for 3\nI0715 00:42:29.104638 3137 log.go:181] (0xc000760b40) (3) Data frame handling\n+ mv -v /tmp/index.html 
/usr/local/apache2/htdocs/\nI0715 00:42:29.106035 3137 log.go:181] (0xc0009b7080) Data frame received for 1\nI0715 00:42:29.106058 3137 log.go:181] (0xc000bb9b80) (1) Data frame handling\nI0715 00:42:29.106071 3137 log.go:181] (0xc000bb9b80) (1) Data frame sent\nI0715 00:42:29.106085 3137 log.go:181] (0xc0009b7080) (0xc000bb9b80) Stream removed, broadcasting: 1\nI0715 00:42:29.106102 3137 log.go:181] (0xc0009b7080) Go away received\nI0715 00:42:29.106468 3137 log.go:181] (0xc0009b7080) (0xc000bb9b80) Stream removed, broadcasting: 1\nI0715 00:42:29.106489 3137 log.go:181] (0xc0009b7080) (0xc000760b40) Stream removed, broadcasting: 3\nI0715 00:42:29.106496 3137 log.go:181] (0xc0009b7080) (0xc000761e00) Stream removed, broadcasting: 5\n" Jul 15 00:42:29.110: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 15 00:42:29.110: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 15 00:42:29.110: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jul 15 00:42:59.149: INFO: Deleting all statefulset in ns statefulset-4970 Jul 15 00:42:59.153: INFO: Scaling statefulset ss to 0 Jul 15 00:42:59.162: INFO: Waiting for statefulset status.replicas updated to 0 Jul 15 00:42:59.163: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:42:59.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4970" for this suite. 
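Throughout the StatefulSet test above, the framework repeatedly polls a condition against a deadline, logging the remaining time on each pass ("doesn't scale past 3 for another 8.93s", and so on). A minimal sketch of that bounded-poll pattern, with illustrative names rather than the framework's actual Go API:

```python
import time

def wait_for(condition, timeout=10.0, interval=1.0, clock=time.monotonic):
    """Poll `condition` until it returns True or `timeout` elapses.

    Loosely mirrors the e2e framework's wait loops, which check the
    condition, report remaining time, sleep, and repeat.
    Returns True on success, False on timeout.
    """
    deadline = clock() + timeout
    while True:
        if condition():
            return True
        remaining = deadline - clock()
        if remaining <= 0:
            return False
        time.sleep(min(interval, remaining))

# Example: a condition that becomes true on the third poll.
calls = {"n": 0}
def ready():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_for(ready, timeout=5.0, interval=0.01))  # prints True
```

The real framework additionally distinguishes "condition returned an error" from "condition not yet met"; this sketch collapses both into a boolean for brevity.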
• [SLOW TEST:92.337 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":294,"completed":226,"skipped":3520,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:42:59.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 15 00:42:59.238: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82d83678-597c-4f34-804c-138b079a9e57" in namespace "downward-api-4762" to be "Succeeded or Failed" Jul 15 00:42:59.254: INFO: Pod "downwardapi-volume-82d83678-597c-4f34-804c-138b079a9e57": Phase="Pending", Reason="", readiness=false. Elapsed: 15.475143ms Jul 15 00:43:01.258: INFO: Pod "downwardapi-volume-82d83678-597c-4f34-804c-138b079a9e57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019645469s Jul 15 00:43:03.262: INFO: Pod "downwardapi-volume-82d83678-597c-4f34-804c-138b079a9e57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023030558s STEP: Saw pod success Jul 15 00:43:03.262: INFO: Pod "downwardapi-volume-82d83678-597c-4f34-804c-138b079a9e57" satisfied condition "Succeeded or Failed" Jul 15 00:43:03.264: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-82d83678-597c-4f34-804c-138b079a9e57 container client-container: STEP: delete the pod Jul 15 00:43:03.332: INFO: Waiting for pod downwardapi-volume-82d83678-597c-4f34-804c-138b079a9e57 to disappear Jul 15 00:43:03.336: INFO: Pod downwardapi-volume-82d83678-597c-4f34-804c-138b079a9e57 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:43:03.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4762" for this suite. 
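The Downward API volume test above creates a pod whose `downwardAPI` volume item carries an explicit file mode, then waits for the pod to reach "Succeeded or Failed". As a rough illustration only — the image, container command, and names here are assumptions, not the test's actual spec — such a pod manifest looks like:

```python
# Hypothetical pod spec resembling what the Downward API volume test
# creates: one volume item projecting metadata.name into a file with
# an explicit mode (the [LinuxOnly] assertion checks that mode).
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "client-container",
            "image": "busybox",  # illustrative, not the test's image
            "command": ["sh", "-c", "cat /etc/podinfo/podname"],
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "downwardAPI": {
                "items": [{
                    "path": "podname",
                    "fieldRef": {"fieldPath": "metadata.name"},
                    "mode": 0o400,
                }],
            },
        }],
    },
}

print(pod["spec"]["volumes"][0]["downwardAPI"]["items"][0]["path"])
```

The `path`, `fieldRef.fieldPath`, and `mode` keys are the actual Kubernetes API fields for downward API volume items; everything else is a placeholder.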
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":227,"skipped":3520,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:43:03.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-5371 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5371 STEP: creating replication controller externalsvc in namespace services-5371 I0715 00:43:03.626617 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5371, replica count: 2 I0715 00:43:06.677038 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I0715 00:43:09.677307 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jul 15 00:43:09.753: INFO: Creating new exec pod Jul 15 00:43:13.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-5371 execpodkk2nd -- /bin/sh -x -c nslookup nodeport-service.services-5371.svc.cluster.local' Jul 15 00:43:13.994: INFO: stderr: "I0715 00:43:13.904528 3155 log.go:181] (0xc0006c31e0) (0xc000b0dae0) Create stream\nI0715 00:43:13.904583 3155 log.go:181] (0xc0006c31e0) (0xc000b0dae0) Stream added, broadcasting: 1\nI0715 00:43:13.909118 3155 log.go:181] (0xc0006c31e0) Reply frame received for 1\nI0715 00:43:13.909161 3155 log.go:181] (0xc0006c31e0) (0xc000532be0) Create stream\nI0715 00:43:13.909173 3155 log.go:181] (0xc0006c31e0) (0xc000532be0) Stream added, broadcasting: 3\nI0715 00:43:13.910287 3155 log.go:181] (0xc0006c31e0) Reply frame received for 3\nI0715 00:43:13.910317 3155 log.go:181] (0xc0006c31e0) (0xc000af7180) Create stream\nI0715 00:43:13.910325 3155 log.go:181] (0xc0006c31e0) (0xc000af7180) Stream added, broadcasting: 5\nI0715 00:43:13.911060 3155 log.go:181] (0xc0006c31e0) Reply frame received for 5\nI0715 00:43:13.977714 3155 log.go:181] (0xc0006c31e0) Data frame received for 5\nI0715 00:43:13.977745 3155 log.go:181] (0xc000af7180) (5) Data frame handling\nI0715 00:43:13.977767 3155 log.go:181] (0xc000af7180) (5) Data frame sent\n+ nslookup nodeport-service.services-5371.svc.cluster.local\nI0715 00:43:13.986170 3155 log.go:181] (0xc0006c31e0) Data frame received for 3\nI0715 00:43:13.986209 3155 log.go:181] (0xc000532be0) (3) Data frame handling\nI0715 00:43:13.986235 3155 log.go:181] (0xc000532be0) (3) Data frame sent\nI0715 00:43:13.986981 3155 log.go:181] (0xc0006c31e0) Data frame 
received for 3\nI0715 00:43:13.986994 3155 log.go:181] (0xc000532be0) (3) Data frame handling\nI0715 00:43:13.987000 3155 log.go:181] (0xc000532be0) (3) Data frame sent\nI0715 00:43:13.987283 3155 log.go:181] (0xc0006c31e0) Data frame received for 3\nI0715 00:43:13.987305 3155 log.go:181] (0xc000532be0) (3) Data frame handling\nI0715 00:43:13.987461 3155 log.go:181] (0xc0006c31e0) Data frame received for 5\nI0715 00:43:13.987499 3155 log.go:181] (0xc000af7180) (5) Data frame handling\nI0715 00:43:13.989396 3155 log.go:181] (0xc0006c31e0) Data frame received for 1\nI0715 00:43:13.989425 3155 log.go:181] (0xc000b0dae0) (1) Data frame handling\nI0715 00:43:13.989441 3155 log.go:181] (0xc000b0dae0) (1) Data frame sent\nI0715 00:43:13.989456 3155 log.go:181] (0xc0006c31e0) (0xc000b0dae0) Stream removed, broadcasting: 1\nI0715 00:43:13.989798 3155 log.go:181] (0xc0006c31e0) (0xc000b0dae0) Stream removed, broadcasting: 1\nI0715 00:43:13.989817 3155 log.go:181] (0xc0006c31e0) (0xc000532be0) Stream removed, broadcasting: 3\nI0715 00:43:13.989824 3155 log.go:181] (0xc0006c31e0) (0xc000af7180) Stream removed, broadcasting: 5\n" Jul 15 00:43:13.994: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5371.svc.cluster.local\tcanonical name = externalsvc.services-5371.svc.cluster.local.\nName:\texternalsvc.services-5371.svc.cluster.local\nAddress: 10.96.240.252\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5371, will wait for the garbage collector to delete the pods Jul 15 00:43:14.055: INFO: Deleting ReplicationController externalsvc took: 7.115724ms Jul 15 00:43:14.155: INFO: Terminating ReplicationController externalsvc pods took: 100.215682ms Jul 15 00:43:29.292: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:43:29.428: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "services-5371" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:26.145 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":294,"completed":228,"skipped":3520,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:43:29.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test 
downward API volume plugin Jul 15 00:43:29.657: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3385441f-fdc8-4da2-813c-f963f0c15d05" in namespace "downward-api-9600" to be "Succeeded or Failed" Jul 15 00:43:29.682: INFO: Pod "downwardapi-volume-3385441f-fdc8-4da2-813c-f963f0c15d05": Phase="Pending", Reason="", readiness=false. Elapsed: 25.166185ms Jul 15 00:43:31.686: INFO: Pod "downwardapi-volume-3385441f-fdc8-4da2-813c-f963f0c15d05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028714649s Jul 15 00:43:33.691: INFO: Pod "downwardapi-volume-3385441f-fdc8-4da2-813c-f963f0c15d05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033999439s STEP: Saw pod success Jul 15 00:43:33.691: INFO: Pod "downwardapi-volume-3385441f-fdc8-4da2-813c-f963f0c15d05" satisfied condition "Succeeded or Failed" Jul 15 00:43:33.694: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-3385441f-fdc8-4da2-813c-f963f0c15d05 container client-container: STEP: delete the pod Jul 15 00:43:33.781: INFO: Waiting for pod downwardapi-volume-3385441f-fdc8-4da2-813c-f963f0c15d05 to disappear Jul 15 00:43:33.787: INFO: Pod downwardapi-volume-3385441f-fdc8-4da2-813c-f963f0c15d05 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:43:33.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9600" for this suite. 
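The NodePort→ExternalName Services test earlier verified the type change by running nslookup inside an exec pod and checking that the service name resolves as a CNAME to externalsvc. A small, hypothetical parser for that nslookup output (the sample text is copied verbatim from the log above; the function name is illustrative):

```python
# Sample stdout from the test's `nslookup nodeport-service...` exec call.
stdout = (
    "Server:\t\t10.96.0.10\n"
    "Address:\t10.96.0.10#53\n\n"
    "nodeport-service.services-5371.svc.cluster.local\tcanonical name = "
    "externalsvc.services-5371.svc.cluster.local.\n"
    "Name:\texternalsvc.services-5371.svc.cluster.local\n"
    "Address: 10.96.240.252\n\n"
)

def canonical_name(ns_output):
    """Extract the CNAME target from nslookup output, without the
    trailing dot; returns None if no canonical-name line is present."""
    for line in ns_output.splitlines():
        if "canonical name =" in line:
            return line.split("canonical name =")[1].strip().rstrip(".")
    return None

print(canonical_name(stdout))  # prints externalsvc.services-5371.svc.cluster.local
```

A non-None result pointing at the externalName target is what tells the test the service type change took effect at the DNS level.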
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":294,"completed":229,"skipped":3526,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:43:33.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 00:43:33.868: INFO: Creating deployment "webserver-deployment" Jul 15 00:43:33.948: INFO: Waiting for observed generation 1 Jul 15 00:43:36.005: INFO: Waiting for all required pods to come up Jul 15 00:43:36.054: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Jul 15 00:43:46.090: INFO: Waiting for deployment "webserver-deployment" to complete Jul 15 00:43:46.096: INFO: Updating deployment "webserver-deployment" with a non-existent image Jul 15 00:43:46.103: INFO: Updating deployment webserver-deployment Jul 15 00:43:46.103: INFO: Waiting for observed generation 2 Jul 15 00:43:48.318: INFO: 
Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jul 15 00:43:48.590: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jul 15 00:43:48.593: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jul 15 00:43:48.641: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jul 15 00:43:48.641: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jul 15 00:43:48.644: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jul 15 00:43:48.648: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Jul 15 00:43:48.648: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Jul 15 00:43:48.655: INFO: Updating deployment webserver-deployment Jul 15 00:43:48.655: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Jul 15 00:43:48.880: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jul 15 00:43:49.160: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jul 15 00:43:49.318: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-623 /apis/apps/v1/namespaces/deployment-623/deployments/webserver-deployment 75ec8d3a-f606-4c1d-9938-7deaaac9c7d1 1235217 3 2020-07-15 00:43:33 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-07-15 00:43:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003fec648 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-07-15 00:43:46 +0000 UTC,LastTransitionTime:2020-07-15 00:43:33 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-15 00:43:48 +0000 UTC,LastTransitionTime:2020-07-15 00:43:48 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jul 15 00:43:49.417: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-623 /apis/apps/v1/namespaces/deployment-623/replicasets/webserver-deployment-6676bcd6d4 97d6c931-3eb5-47ee-9e5c-2ad7226997e2 1235249 3 2020-07-15 00:43:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 75ec8d3a-f606-4c1d-9938-7deaaac9c7d1 0xc003fecaf7 0xc003fecaf8}] [] [{kube-controller-manager Update apps/v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"75ec8d3a-f606-4c1d-9938-7deaaac9c7d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003fecb78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 15 00:43:49.417: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jul 15 00:43:49.417: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-623 /apis/apps/v1/namespaces/deployment-623/replicasets/webserver-deployment-84855cf797 5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1 1235236 3 2020-07-15 00:43:33 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 75ec8d3a-f606-4c1d-9938-7deaaac9c7d1 0xc003fecbd7 0xc003fecbd8}] [] [{kube-controller-manager Update apps/v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"75ec8d3a-f606-4c1d-9938-7deaaac9c7d1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Sele
ctor:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003fecc48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jul 15 00:43:49.505: INFO: Pod "webserver-deployment-6676bcd6d4-2zfqb" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2zfqb webserver-deployment-6676bcd6d4- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-6676bcd6d4-2zfqb 2599c82f-4393-402e-9721-d9c30ff51970 1235227 0 2020-07-15 00:43:49 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 97d6c931-3eb5-47ee-9e5c-2ad7226997e2 0xc0038937c7 0xc0038937c8}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d6c931-3eb5-47ee-9e5c-2ad7226997e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:f
alse,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.509: INFO: Pod "webserver-deployment-6676bcd6d4-4bwh2" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4bwh2 
webserver-deployment-6676bcd6d4- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-6676bcd6d4-4bwh2 5a378df9-cffc-4e08-9d4a-e47d315c463f 1235237 0 2020-07-15 00:43:48 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 97d6c931-3eb5-47ee-9e5c-2ad7226997e2 0xc003893907 0xc003893908}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d6c931-3eb5-47ee-9e5c-2ad7226997e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alway
s,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-07-15 00:43:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.509: INFO: Pod "webserver-deployment-6676bcd6d4-7wdhs" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-7wdhs webserver-deployment-6676bcd6d4- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-6676bcd6d4-7wdhs 7a570d88-357b-44d7-8764-e855d194f7d5 1235224 0 2020-07-15 00:43:49 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 97d6c931-3eb5-47ee-9e5c-2ad7226997e2 0xc003893b17 0xc003893b18}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d6c931-3eb5-47ee-9e5c-2ad7226997e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:f
alse,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.509: INFO: Pod "webserver-deployment-6676bcd6d4-8dpxh" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-8dpxh 
webserver-deployment-6676bcd6d4- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-6676bcd6d4-8dpxh be6a6e28-1126-4920-b303-d0d5c21d80bc 1235206 0 2020-07-15 00:43:49 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 97d6c931-3eb5-47ee-9e5c-2ad7226997e2 0xc003893c57 0xc003893c58}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d6c931-3eb5-47ee-9e5c-2ad7226997e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secret
s/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.509: INFO: Pod "webserver-deployment-6676bcd6d4-kp5gs" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-kp5gs webserver-deployment-6676bcd6d4- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-6676bcd6d4-kp5gs 67001399-c0cc-40f6-b2df-64edae125ea2 1235162 0 2020-07-15 00:43:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 97d6c931-3eb5-47ee-9e5c-2ad7226997e2 0xc003893d97 0xc003893d98}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d6c931-3eb5-47ee-9e5c-2ad7226997e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-15 00:43:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alway
s,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-07-15 00:43:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.510: INFO: Pod "webserver-deployment-6676bcd6d4-n2w7k" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-n2w7k webserver-deployment-6676bcd6d4- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-6676bcd6d4-n2w7k 8156f1f4-c495-4575-a464-f0140456ea0d 1235232 0 2020-07-15 00:43:49 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 97d6c931-3eb5-47ee-9e5c-2ad7226997e2 0xc003893f47 0xc003893f48}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d6c931-3eb5-47ee-9e5c-2ad7226997e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:f
alse,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.510: INFO: Pod "webserver-deployment-6676bcd6d4-nn9s6" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-nn9s6 
webserver-deployment-6676bcd6d4- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-6676bcd6d4-nn9s6 c9d71b63-c12f-468a-a2b7-210840cd35ac 1235208 0 2020-07-15 00:43:49 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 97d6c931-3eb5-47ee-9e5c-2ad7226997e2 0xc001c88087 0xc001c88088}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d6c931-3eb5-47ee-9e5c-2ad7226997e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secret
s/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.510: INFO: Pod "webserver-deployment-6676bcd6d4-pbxg6" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-pbxg6 webserver-deployment-6676bcd6d4- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-6676bcd6d4-pbxg6 74819413-6dc0-47ca-b7cc-560548d8d071 1235149 0 2020-07-15 00:43:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 97d6c931-3eb5-47ee-9e5c-2ad7226997e2 0xc001c881c7 0xc001c881c8}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d6c931-3eb5-47ee-9e5c-2ad7226997e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-15 00:43:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alway
s,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-07-15 00:43:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.510: INFO: Pod "webserver-deployment-6676bcd6d4-pl8pb" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-pl8pb webserver-deployment-6676bcd6d4- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-6676bcd6d4-pl8pb ee14f800-5c83-4dff-b55e-0ec84c4fa502 1235138 0 2020-07-15 00:43:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 97d6c931-3eb5-47ee-9e5c-2ad7226997e2 0xc001c88377 0xc001c88378}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d6c931-3eb5-47ee-9e5c-2ad7226997e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-15 00:43:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alway
s,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-07-15 00:43:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.511: INFO: Pod "webserver-deployment-6676bcd6d4-sbjkz" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-sbjkz webserver-deployment-6676bcd6d4- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-6676bcd6d4-sbjkz 40d101bb-2688-4143-baa5-7c832df1ff72 1235230 0 2020-07-15 00:43:49 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 97d6c931-3eb5-47ee-9e5c-2ad7226997e2 0xc001c88527 0xc001c88528}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d6c931-3eb5-47ee-9e5c-2ad7226997e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:f
alse,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.511: INFO: Pod "webserver-deployment-6676bcd6d4-sd8tm" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-sd8tm 
webserver-deployment-6676bcd6d4- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-6676bcd6d4-sd8tm 723bfc5e-acfb-466a-9ba4-135e568abd86 1235135 0 2020-07-15 00:43:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 97d6c931-3eb5-47ee-9e5c-2ad7226997e2 0xc001c88667 0xc001c88668}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d6c931-3eb5-47ee-9e5c-2ad7226997e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-15 00:43:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alway
s,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-07-15 00:43:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.511: INFO: Pod "webserver-deployment-6676bcd6d4-tf4mh" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-tf4mh webserver-deployment-6676bcd6d4- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-6676bcd6d4-tf4mh 453d1d1c-e0c2-46e8-ad5e-52b71d563c72 1235223 0 2020-07-15 00:43:49 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 97d6c931-3eb5-47ee-9e5c-2ad7226997e2 0xc001c88817 0xc001c88818}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d6c931-3eb5-47ee-9e5c-2ad7226997e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:f
alse,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.511: INFO: Pod "webserver-deployment-6676bcd6d4-z65z7" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-z65z7 
webserver-deployment-6676bcd6d4- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-6676bcd6d4-z65z7 ef6dec78-3502-4d0e-8bc2-40440655e190 1235164 0 2020-07-15 00:43:46 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 97d6c931-3eb5-47ee-9e5c-2ad7226997e2 0xc001c88957 0xc001c88958}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d6c931-3eb5-47ee-9e5c-2ad7226997e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-15 00:43:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alway
s,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-07-15 00:43:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.512: INFO: Pod "webserver-deployment-84855cf797-7dc7b" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-7dc7b webserver-deployment-84855cf797- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-84855cf797-7dc7b 0b072ab3-cb97-4248-9a08-6377f84781d3 1235211 0 2020-07-15 00:43:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1 0xc001c88b27 0xc001c88b28}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,Secc
ompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.512: INFO: Pod "webserver-deployment-84855cf797-9hb54" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-9hb54 
webserver-deployment-84855cf797- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-84855cf797-9hb54 bcf951c3-b1a6-4d6e-9cec-1d827309db9b 1235073 0 2020-07-15 00:43:34 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1 0xc001c88c57 0xc001c88c58}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-15 00:43:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.93\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevic
es:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:34 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.93,StartTime:2020-07-15 00:43:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-15 00:43:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6cb44b58bbf8052cf844c9d16b3c057cbc8ec97b215448010041de9a5edf7710,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.93,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.512: INFO: Pod "webserver-deployment-84855cf797-bsrmn" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-bsrmn webserver-deployment-84855cf797- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-84855cf797-bsrmn c78b9eb4-3ce0-434f-be8b-0bb2e1290b0b 1235219 0 2020-07-15 00:43:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1 0xc001c88e07 0xc001c88e08}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,Secc
ompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.512: INFO: Pod "webserver-deployment-84855cf797-d64wn" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-d64wn 
webserver-deployment-84855cf797- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-84855cf797-d64wn a3b50682-0d7f-409a-a0d7-6853291d3f23 1235231 0 2020-07-15 00:43:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1 0xc001c88f37 0xc001c88f38}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,M
ountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.512: INFO: Pod "webserver-deployment-84855cf797-ghhxl" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-ghhxl webserver-deployment-84855cf797- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-84855cf797-ghhxl 03f5433e-168f-4bb0-bd94-d0367c5ff0ea 1235228 0 2020-07-15 00:43:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1 0xc001c89067 0xc001c89068}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVol
ume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClass
Name:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.513: INFO: Pod "webserver-deployment-84855cf797-jpgq2" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-jpgq2 webserver-deployment-84855cf797- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-84855cf797-jpgq2 25378c49-5133-47cf-a72d-216e13a90334 1235229 0 2020-07-15 00:43:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1 0xc001c891a7 0xc001c891a8}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,Secc
ompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.513: INFO: Pod "webserver-deployment-84855cf797-lt4rq" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-lt4rq 
webserver-deployment-84855cf797- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-84855cf797-lt4rq 8d6ba47b-6590-4670-acf4-8ebf2b7cc12d 1235076 0 2020-07-15 00:43:34 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1 0xc001c892d7 0xc001c892d8}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-15 00:43:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.212\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevi
ces:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:34 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.1.212,StartTime:2020-07-15 00:43:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-15 00:43:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6225364026f330b0db18a61776e57cf86b0669c59b38ed45c66b2f430ceb749f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.212,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.513: INFO: Pod "webserver-deployment-84855cf797-m7z77" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-m7z77 webserver-deployment-84855cf797- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-84855cf797-m7z77 91991a16-d9b7-4469-837a-ecc9dcf73595 1235233 0 2020-07-15 00:43:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1 0xc001c89487 0xc001c89488}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,Secc
ompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.513: INFO: Pod "webserver-deployment-84855cf797-ncbl9" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-ncbl9 
webserver-deployment-84855cf797- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-84855cf797-ncbl9 f36cc7ad-c9d8-4059-aef6-aeb24122efc8 1235054 0 2020-07-15 00:43:33 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1 0xc001c895b7 0xc001c895b8}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-15 00:43:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.91\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevic
es:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:33 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.91,StartTime:2020-07-15 00:43:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-15 00:43:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7c0f1c8d6c8e32719e2064a31b715bb06292bcfb196153e58dd3bea3361681de,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.91,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.513: INFO: Pod "webserver-deployment-84855cf797-nttgc" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-nttgc webserver-deployment-84855cf797- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-84855cf797-nttgc ea1b39c8-b3f5-415d-9ffa-12fdcf051525 1235255 0 2020-07-15 00:43:48 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1 0xc001c89767 0xc001c89768}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Res
ourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodC
ondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-07-15 00:43:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.514: INFO: Pod "webserver-deployment-84855cf797-q7jml" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-q7jml webserver-deployment-84855cf797- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-84855cf797-q7jml de2c5196-0ca8-4b69-bf7d-0c3ab3d64a40 1235065 0 2020-07-15 00:43:33 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1 0xc001c898f7 0xc001c898f8}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-15 00:43:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.211\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequiremen
ts{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodS
tatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.1.211,StartTime:2020-07-15 00:43:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-15 00:43:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f18c78ca8ace0941d49c5b2b869091ab9b6ee5e693e198f8af2f8fb75fbdc931,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.211,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.514: INFO: Pod "webserver-deployment-84855cf797-rmfg4" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-rmfg4 webserver-deployment-84855cf797- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-84855cf797-rmfg4 d80e2394-6074-4706-b50f-fc24dee73c9b 1235220 0 2020-07-15 00:43:48 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1 0xc001c89aa7 0xc001c89aa8}] [] 
[{kube-controller-manager Update v1 2020-07-15 00:43:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[
]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer
{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-07-15 00:43:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.514: INFO: Pod "webserver-deployment-84855cf797-s8bqw" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-s8bqw webserver-deployment-84855cf797- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-84855cf797-s8bqw 6fbe0b3a-29e4-4d11-86e6-878cba5c8437 1235088 0 2020-07-15 00:43:34 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1 0xc001c89c37 0xc001c89c38}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:34 +0000 UTC 
FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-15 00:43:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.214\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceR
equirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},St
atus:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.1.214,StartTime:2020-07-15 00:43:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-15 00:43:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://46adc2543cfc61c4bbe8ac3a9f6b6588b6d2a745278c3877460d8f94c11c3e34,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.214,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.514: INFO: Pod "webserver-deployment-84855cf797-t5mqm" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-t5mqm webserver-deployment-84855cf797- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-84855cf797-t5mqm 2a9a19a0-ec59-41cd-9a58-889406c5904a 1235209 0 2020-07-15 00:43:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1 0xc001c89de7 0xc001c89de8}] 
[] [{kube-controller-manager Update v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,All
owPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.514: INFO: Pod 
"webserver-deployment-84855cf797-t94mn" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-t94mn webserver-deployment-84855cf797- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-84855cf797-t94mn c616f5bf-2294-4354-b55b-4a1ee4b83218 1235094 0 2020-07-15 00:43:34 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1 0xc001c89f17 0xc001c89f18}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-15 00:43:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.95\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevic
es:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:34 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.95,StartTime:2020-07-15 00:43:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-15 00:43:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e92e172704529af4f1dc3fcf80a954cf9d64ba542bd5be35ca72c9fdf6acf113,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.95,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.515: INFO: Pod "webserver-deployment-84855cf797-w2dxt" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-w2dxt webserver-deployment-84855cf797- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-84855cf797-w2dxt 4f52ec01-cc09-4583-8c90-9a860ae289a8 1235207 0 2020-07-15 00:43:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1 0xc00399a0c7 0xc00399a0c8}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,Secc
ompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.515: INFO: Pod "webserver-deployment-84855cf797-wbcfw" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-wbcfw 
webserver-deployment-84855cf797- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-84855cf797-wbcfw 13eb9cd6-c0f6-4ebc-a40a-c9f7d1bc0855 1235061 0 2020-07-15 00:43:33 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1 0xc00399a1f7 0xc00399a1f8}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-15 00:43:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.92\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevic
es:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:34 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.92,StartTime:2020-07-15 00:43:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-15 00:43:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4694d70064bdc02fc0be5b6de794b16cecdf8943feaa9ed460616c257ad43cc8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.92,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.515: INFO: Pod "webserver-deployment-84855cf797-wl4cr" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-wl4cr webserver-deployment-84855cf797- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-84855cf797-wl4cr 3571d28e-5128-4f24-80d5-9b02340533e4 1235256 0 2020-07-15 00:43:48 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1 0xc00399a3a7 0xc00399a3a8}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Res
ourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCo
ndition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-07-15 00:43:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.515: INFO: Pod "webserver-deployment-84855cf797-xrh4j" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-xrh4j webserver-deployment-84855cf797- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-84855cf797-xrh4j 0e89db2b-1a63-429e-8397-38e8be1aaa4b 1235210 0 2020-07-15 00:43:49 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1 0xc00399a537 0xc00399a538}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,Secc
ompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:43:49.515: INFO: Pod "webserver-deployment-84855cf797-zhr22" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-zhr22 
webserver-deployment-84855cf797- deployment-623 /api/v1/namespaces/deployment-623/pods/webserver-deployment-84855cf797-zhr22 e9d9c371-18fc-429f-b3ad-17b6e41bf2a8 1235087 0 2020-07-15 00:43:34 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1 0xc00399a667 0xc00399a668}] [] [{kube-controller-manager Update v1 2020-07-15 00:43:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ccb1bd0-cf6f-4156-b700-f08f85b0d7f1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-15 00:43:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.94\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lj6th,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lj6th,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lj6th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevic
es:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-15 00:43:34 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.94,StartTime:2020-07-15 00:43:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-15 00:43:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2bb4af4fd26cf72e560d3b631eac2579d14b86577bf910b861d11fe991d9ec47,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.94,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:43:49.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-623" for this suite. 
• [SLOW TEST:15.838 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":294,"completed":230,"skipped":3544,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:43:49.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 15 00:43:50.495: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 15 00:43:52.802: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370631, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 15 00:43:54.983: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370631, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 15 00:43:57.074: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370631, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 15 00:43:59.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370631, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 15 00:44:00.892: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370631, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 15 00:44:03.170: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370631, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 15 00:44:05.346: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370631, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 15 00:44:06.967: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370631, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730370630, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 15 00:44:10.021: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on 
fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:44:10.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4644" for this suite. STEP: Destroying namespace "webhook-4644-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:21.453 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":294,"completed":231,"skipped":3546,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:44:11.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-c2e5f2a0-049e-4276-8bad-1f67b4c17db5 STEP: Creating secret with name s-test-opt-upd-290094cc-3d26-476e-9ae2-95e5258f0513 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-c2e5f2a0-049e-4276-8bad-1f67b4c17db5 STEP: Updating secret s-test-opt-upd-290094cc-3d26-476e-9ae2-95e5258f0513 STEP: Creating secret with name s-test-opt-create-e7e31fb0-7772-4a51-86ce-fc4fe739a856 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:45:30.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8117" for this suite. 
• [SLOW TEST:79.090 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":232,"skipped":3553,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:45:30.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7066 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in 
namespace services-7066 I0715 00:45:30.435407 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7066, replica count: 2 I0715 00:45:33.485844 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0715 00:45:36.486102 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 15 00:45:36.486: INFO: Creating new exec pod Jul 15 00:45:41.523: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-7066 execpodjwwdm -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jul 15 00:45:41.739: INFO: stderr: "I0715 00:45:41.646238 3173 log.go:181] (0xc000a2d760) (0xc000d943c0) Create stream\nI0715 00:45:41.646285 3173 log.go:181] (0xc000a2d760) (0xc000d943c0) Stream added, broadcasting: 1\nI0715 00:45:41.651738 3173 log.go:181] (0xc000a2d760) Reply frame received for 1\nI0715 00:45:41.651774 3173 log.go:181] (0xc000a2d760) (0xc000c21b80) Create stream\nI0715 00:45:41.651783 3173 log.go:181] (0xc000a2d760) (0xc000c21b80) Stream added, broadcasting: 3\nI0715 00:45:41.652641 3173 log.go:181] (0xc000a2d760) Reply frame received for 3\nI0715 00:45:41.652676 3173 log.go:181] (0xc000a2d760) (0xc000c061e0) Create stream\nI0715 00:45:41.652690 3173 log.go:181] (0xc000a2d760) (0xc000c061e0) Stream added, broadcasting: 5\nI0715 00:45:41.653741 3173 log.go:181] (0xc000a2d760) Reply frame received for 5\nI0715 00:45:41.732500 3173 log.go:181] (0xc000a2d760) Data frame received for 5\nI0715 00:45:41.732604 3173 log.go:181] (0xc000c061e0) (5) Data frame handling\nI0715 00:45:41.732664 3173 log.go:181] (0xc000c061e0) (5) Data frame sent\nI0715 00:45:41.732684 3173 log.go:181] (0xc000a2d760) Data frame received for 5\nI0715 00:45:41.732693 3173 log.go:181] 
(0xc000c061e0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0715 00:45:41.732716 3173 log.go:181] (0xc000c061e0) (5) Data frame sent\nI0715 00:45:41.733166 3173 log.go:181] (0xc000a2d760) Data frame received for 5\nI0715 00:45:41.733198 3173 log.go:181] (0xc000c061e0) (5) Data frame handling\nI0715 00:45:41.733416 3173 log.go:181] (0xc000a2d760) Data frame received for 3\nI0715 00:45:41.733447 3173 log.go:181] (0xc000c21b80) (3) Data frame handling\nI0715 00:45:41.734994 3173 log.go:181] (0xc000a2d760) Data frame received for 1\nI0715 00:45:41.735011 3173 log.go:181] (0xc000d943c0) (1) Data frame handling\nI0715 00:45:41.735021 3173 log.go:181] (0xc000d943c0) (1) Data frame sent\nI0715 00:45:41.735030 3173 log.go:181] (0xc000a2d760) (0xc000d943c0) Stream removed, broadcasting: 1\nI0715 00:45:41.735043 3173 log.go:181] (0xc000a2d760) Go away received\nI0715 00:45:41.735428 3173 log.go:181] (0xc000a2d760) (0xc000d943c0) Stream removed, broadcasting: 1\nI0715 00:45:41.735454 3173 log.go:181] (0xc000a2d760) (0xc000c21b80) Stream removed, broadcasting: 3\nI0715 00:45:41.735462 3173 log.go:181] (0xc000a2d760) (0xc000c061e0) Stream removed, broadcasting: 5\n" Jul 15 00:45:41.739: INFO: stdout: "" Jul 15 00:45:41.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-7066 execpodjwwdm -- /bin/sh -x -c nc -zv -t -w 2 10.109.210.237 80' Jul 15 00:45:41.920: INFO: stderr: "I0715 00:45:41.848104 3187 log.go:181] (0xc000924210) (0xc000c88280) Create stream\nI0715 00:45:41.848141 3187 log.go:181] (0xc000924210) (0xc000c88280) Stream added, broadcasting: 1\nI0715 00:45:41.849876 3187 log.go:181] (0xc000924210) Reply frame received for 1\nI0715 00:45:41.849895 3187 log.go:181] (0xc000924210) (0xc0005b3860) Create stream\nI0715 00:45:41.849901 3187 log.go:181] (0xc000924210) (0xc0005b3860) Stream added, 
broadcasting: 3\nI0715 00:45:41.850704 3187 log.go:181] (0xc000924210) Reply frame received for 3\nI0715 00:45:41.850745 3187 log.go:181] (0xc000924210) (0xc0007240a0) Create stream\nI0715 00:45:41.850759 3187 log.go:181] (0xc000924210) (0xc0007240a0) Stream added, broadcasting: 5\nI0715 00:45:41.851449 3187 log.go:181] (0xc000924210) Reply frame received for 5\nI0715 00:45:41.914026 3187 log.go:181] (0xc000924210) Data frame received for 5\nI0715 00:45:41.914085 3187 log.go:181] (0xc000924210) Data frame received for 3\nI0715 00:45:41.914135 3187 log.go:181] (0xc0005b3860) (3) Data frame handling\nI0715 00:45:41.914162 3187 log.go:181] (0xc0007240a0) (5) Data frame handling\nI0715 00:45:41.914175 3187 log.go:181] (0xc0007240a0) (5) Data frame sent\nI0715 00:45:41.914186 3187 log.go:181] (0xc000924210) Data frame received for 5\nI0715 00:45:41.914195 3187 log.go:181] (0xc0007240a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.210.237 80\nConnection to 10.109.210.237 80 port [tcp/http] succeeded!\nI0715 00:45:41.915250 3187 log.go:181] (0xc000924210) Data frame received for 1\nI0715 00:45:41.915277 3187 log.go:181] (0xc000c88280) (1) Data frame handling\nI0715 00:45:41.915294 3187 log.go:181] (0xc000c88280) (1) Data frame sent\nI0715 00:45:41.915316 3187 log.go:181] (0xc000924210) (0xc000c88280) Stream removed, broadcasting: 1\nI0715 00:45:41.915338 3187 log.go:181] (0xc000924210) Go away received\nI0715 00:45:41.915867 3187 log.go:181] (0xc000924210) (0xc000c88280) Stream removed, broadcasting: 1\nI0715 00:45:41.915888 3187 log.go:181] (0xc000924210) (0xc0005b3860) Stream removed, broadcasting: 3\nI0715 00:45:41.915895 3187 log.go:181] (0xc000924210) (0xc0007240a0) Stream removed, broadcasting: 5\n" Jul 15 00:45:41.921: INFO: stdout: "" Jul 15 00:45:41.921: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 
00:45:41.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7066" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:11.779 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":294,"completed":233,"skipped":3562,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:45:41.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 00:45:42.088: INFO: >>> 
kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jul 15 00:45:44.005: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5046 create -f -' Jul 15 00:45:48.063: INFO: stderr: "" Jul 15 00:45:48.063: INFO: stdout: "e2e-test-crd-publish-openapi-9797-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jul 15 00:45:48.063: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5046 delete e2e-test-crd-publish-openapi-9797-crds test-cr' Jul 15 00:45:48.467: INFO: stderr: "" Jul 15 00:45:48.467: INFO: stdout: "e2e-test-crd-publish-openapi-9797-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jul 15 00:45:48.467: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5046 apply -f -' Jul 15 00:45:48.784: INFO: stderr: "" Jul 15 00:45:48.784: INFO: stdout: "e2e-test-crd-publish-openapi-9797-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jul 15 00:45:48.784: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5046 delete e2e-test-crd-publish-openapi-9797-crds test-cr' Jul 15 00:45:48.889: INFO: stderr: "" Jul 15 00:45:48.889: INFO: stdout: "e2e-test-crd-publish-openapi-9797-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jul 15 00:45:48.889: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9797-crds' Jul 15 00:45:49.155: INFO: stderr: "" Jul 15 00:45:49.155: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9797-crd\nVERSION: 
crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:45:52.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5046" for this suite. • [SLOW TEST:10.086 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":294,"completed":234,"skipped":3593,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:45:52.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 00:45:52.177: INFO: Waiting up to 5m0s for pod "busybox-user-65534-24b58a07-251c-4e79-98f4-94a0efb77ed8" in namespace "security-context-test-3470" to be "Succeeded or Failed" Jul 15 00:45:52.182: INFO: Pod "busybox-user-65534-24b58a07-251c-4e79-98f4-94a0efb77ed8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181924ms Jul 15 00:45:54.195: INFO: Pod "busybox-user-65534-24b58a07-251c-4e79-98f4-94a0efb77ed8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017613557s Jul 15 00:45:56.207: INFO: Pod "busybox-user-65534-24b58a07-251c-4e79-98f4-94a0efb77ed8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02992126s Jul 15 00:45:56.207: INFO: Pod "busybox-user-65534-24b58a07-251c-4e79-98f4-94a0efb77ed8" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:45:56.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3470" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":235,"skipped":3671,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:45:56.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-8ssj STEP: Creating a pod to test atomic-volume-subpath Jul 15 00:45:56.279: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8ssj" in namespace "subpath-4753" to be "Succeeded or Failed" Jul 15 00:45:56.296: INFO: Pod "pod-subpath-test-configmap-8ssj": Phase="Pending", Reason="", readiness=false. Elapsed: 16.205899ms Jul 15 00:45:58.429: INFO: Pod "pod-subpath-test-configmap-8ssj": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.149960255s Jul 15 00:46:00.434: INFO: Pod "pod-subpath-test-configmap-8ssj": Phase="Running", Reason="", readiness=true. Elapsed: 4.154228821s Jul 15 00:46:02.438: INFO: Pod "pod-subpath-test-configmap-8ssj": Phase="Running", Reason="", readiness=true. Elapsed: 6.158094593s Jul 15 00:46:04.442: INFO: Pod "pod-subpath-test-configmap-8ssj": Phase="Running", Reason="", readiness=true. Elapsed: 8.162400041s Jul 15 00:46:06.446: INFO: Pod "pod-subpath-test-configmap-8ssj": Phase="Running", Reason="", readiness=true. Elapsed: 10.166536428s Jul 15 00:46:08.450: INFO: Pod "pod-subpath-test-configmap-8ssj": Phase="Running", Reason="", readiness=true. Elapsed: 12.170752334s Jul 15 00:46:10.461: INFO: Pod "pod-subpath-test-configmap-8ssj": Phase="Running", Reason="", readiness=true. Elapsed: 14.181588794s Jul 15 00:46:12.465: INFO: Pod "pod-subpath-test-configmap-8ssj": Phase="Running", Reason="", readiness=true. Elapsed: 16.185746882s Jul 15 00:46:14.469: INFO: Pod "pod-subpath-test-configmap-8ssj": Phase="Running", Reason="", readiness=true. Elapsed: 18.189961623s Jul 15 00:46:16.474: INFO: Pod "pod-subpath-test-configmap-8ssj": Phase="Running", Reason="", readiness=true. Elapsed: 20.19409896s Jul 15 00:46:18.478: INFO: Pod "pod-subpath-test-configmap-8ssj": Phase="Running", Reason="", readiness=true. Elapsed: 22.198453423s Jul 15 00:46:20.510: INFO: Pod "pod-subpath-test-configmap-8ssj": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.230860788s STEP: Saw pod success Jul 15 00:46:20.510: INFO: Pod "pod-subpath-test-configmap-8ssj" satisfied condition "Succeeded or Failed" Jul 15 00:46:20.542: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-8ssj container test-container-subpath-configmap-8ssj: STEP: delete the pod Jul 15 00:46:20.760: INFO: Waiting for pod pod-subpath-test-configmap-8ssj to disappear Jul 15 00:46:20.781: INFO: Pod pod-subpath-test-configmap-8ssj no longer exists STEP: Deleting pod pod-subpath-test-configmap-8ssj Jul 15 00:46:20.781: INFO: Deleting pod "pod-subpath-test-configmap-8ssj" in namespace "subpath-4753" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:46:20.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4753" for this suite. • [SLOW TEST:24.591 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":294,"completed":236,"skipped":3671,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as 
non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:46:20.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-663c2a40-4aa1-4922-8b5d-8306d7f582d5 STEP: Creating a pod to test consume secrets Jul 15 00:46:20.892: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fb25245f-9850-4184-90fa-9dc407282f1a" in namespace "projected-6483" to be "Succeeded or Failed" Jul 15 00:46:20.911: INFO: Pod "pod-projected-secrets-fb25245f-9850-4184-90fa-9dc407282f1a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.739124ms Jul 15 00:46:22.962: INFO: Pod "pod-projected-secrets-fb25245f-9850-4184-90fa-9dc407282f1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070735927s Jul 15 00:46:24.967: INFO: Pod "pod-projected-secrets-fb25245f-9850-4184-90fa-9dc407282f1a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.074883315s STEP: Saw pod success Jul 15 00:46:24.967: INFO: Pod "pod-projected-secrets-fb25245f-9850-4184-90fa-9dc407282f1a" satisfied condition "Succeeded or Failed" Jul 15 00:46:24.970: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-fb25245f-9850-4184-90fa-9dc407282f1a container projected-secret-volume-test: STEP: delete the pod Jul 15 00:46:25.118: INFO: Waiting for pod pod-projected-secrets-fb25245f-9850-4184-90fa-9dc407282f1a to disappear Jul 15 00:46:25.134: INFO: Pod pod-projected-secrets-fb25245f-9850-4184-90fa-9dc407282f1a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:46:25.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6483" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":237,"skipped":3684,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:46:25.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default 
service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jul 15 00:46:25.284: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2144 /api/v1/namespaces/watch-2144/configmaps/e2e-watch-test-configmap-a 71e593a9-4512-44cc-8334-50f66f344254 1236171 0 2020-07-15 00:46:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-15 00:46:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jul 15 00:46:25.284: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2144 /api/v1/namespaces/watch-2144/configmaps/e2e-watch-test-configmap-a 71e593a9-4512-44cc-8334-50f66f344254 1236171 0 2020-07-15 00:46:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-15 00:46:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jul 15 00:46:35.293: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2144 /api/v1/namespaces/watch-2144/configmaps/e2e-watch-test-configmap-a 71e593a9-4512-44cc-8334-50f66f344254 1236220 0 2020-07-15 00:46:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-15 00:46:35 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 15 00:46:35.293: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2144 /api/v1/namespaces/watch-2144/configmaps/e2e-watch-test-configmap-a 71e593a9-4512-44cc-8334-50f66f344254 1236220 0 2020-07-15 00:46:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-15 00:46:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jul 15 00:46:45.301: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2144 /api/v1/namespaces/watch-2144/configmaps/e2e-watch-test-configmap-a 71e593a9-4512-44cc-8334-50f66f344254 1236250 0 2020-07-15 00:46:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-15 00:46:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 15 00:46:45.301: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2144 /api/v1/namespaces/watch-2144/configmaps/e2e-watch-test-configmap-a 71e593a9-4512-44cc-8334-50f66f344254 1236250 0 2020-07-15 00:46:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-15 00:46:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers 
observe the notification Jul 15 00:46:55.309: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2144 /api/v1/namespaces/watch-2144/configmaps/e2e-watch-test-configmap-a 71e593a9-4512-44cc-8334-50f66f344254 1236280 0 2020-07-15 00:46:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-15 00:46:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 15 00:46:55.309: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2144 /api/v1/namespaces/watch-2144/configmaps/e2e-watch-test-configmap-a 71e593a9-4512-44cc-8334-50f66f344254 1236280 0 2020-07-15 00:46:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-15 00:46:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jul 15 00:47:05.315: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2144 /api/v1/namespaces/watch-2144/configmaps/e2e-watch-test-configmap-b fddd21b3-5eeb-42b5-9aa8-bc5f6b94acde 1236310 0 2020-07-15 00:47:05 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-07-15 00:47:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jul 15 00:47:05.315: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2144 /api/v1/namespaces/watch-2144/configmaps/e2e-watch-test-configmap-b fddd21b3-5eeb-42b5-9aa8-bc5f6b94acde 1236310 0 2020-07-15 00:47:05 +0000 UTC 
map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-07-15 00:47:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jul 15 00:47:15.322: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2144 /api/v1/namespaces/watch-2144/configmaps/e2e-watch-test-configmap-b fddd21b3-5eeb-42b5-9aa8-bc5f6b94acde 1236340 0 2020-07-15 00:47:05 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-07-15 00:47:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jul 15 00:47:15.322: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2144 /api/v1/namespaces/watch-2144/configmaps/e2e-watch-test-configmap-b fddd21b3-5eeb-42b5-9aa8-bc5f6b94acde 1236340 0 2020-07-15 00:47:05 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-07-15 00:47:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:47:25.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2144" for this suite. 
• [SLOW TEST:60.190 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":294,"completed":238,"skipped":3705,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:47:25.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jul 15 00:47:25.401: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 15 00:47:25.440: INFO: Waiting for terminating namespaces to be deleted... 
Jul 15 00:47:25.442: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jul 15 00:47:25.447: INFO: kindnet-qt4jk from kube-system started at 2020-07-10 10:30:16 +0000 UTC (1 container statuses recorded) Jul 15 00:47:25.447: INFO: Container kindnet-cni ready: true, restart count 0 Jul 15 00:47:25.447: INFO: kube-proxy-xb9q4 from kube-system started at 2020-07-10 10:30:16 +0000 UTC (1 container statuses recorded) Jul 15 00:47:25.447: INFO: Container kube-proxy ready: true, restart count 0 Jul 15 00:47:25.447: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jul 15 00:47:25.452: INFO: kindnet-gkkxx from kube-system started at 2020-07-10 10:30:17 +0000 UTC (1 container statuses recorded) Jul 15 00:47:25.452: INFO: Container kindnet-cni ready: true, restart count 0 Jul 15 00:47:25.452: INFO: kube-proxy-s596l from kube-system started at 2020-07-10 10:30:17 +0000 UTC (1 container statuses recorded) Jul 15 00:47:25.452: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-87b3f129-b0a3-4bf2-894f-e7cc33841b19 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-87b3f129-b0a3-4bf2-894f-e7cc33841b19 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-87b3f129-b0a3-4bf2-894f-e7cc33841b19 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:52:33.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-993" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.314 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":294,"completed":239,"skipped":3706,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} 
SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:52:33.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Jul 15 00:52:33.733: INFO: namespace kubectl-6899 Jul 15 00:52:33.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6899' Jul 15 00:52:34.049: INFO: stderr: "" Jul 15 00:52:34.049: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jul 15 00:52:35.054: INFO: Selector matched 1 pods for map[app:agnhost] Jul 15 00:52:35.054: INFO: Found 0 / 1 Jul 15 00:52:36.069: INFO: Selector matched 1 pods for map[app:agnhost] Jul 15 00:52:36.070: INFO: Found 0 / 1 Jul 15 00:52:37.054: INFO: Selector matched 1 pods for map[app:agnhost] Jul 15 00:52:37.054: INFO: Found 0 / 1 Jul 15 00:52:38.058: INFO: Selector matched 1 pods for map[app:agnhost] Jul 15 00:52:38.058: INFO: Found 1 / 1 Jul 15 00:52:38.058: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 15 00:52:38.061: INFO: Selector matched 1 pods for map[app:agnhost] Jul 15 00:52:38.061: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jul 15 00:52:38.061: INFO: wait on agnhost-primary startup in kubectl-6899 Jul 15 00:52:38.061: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config logs agnhost-primary-69d2f agnhost-primary --namespace=kubectl-6899' Jul 15 00:52:38.408: INFO: stderr: "" Jul 15 00:52:38.408: INFO: stdout: "Paused\n" STEP: exposing RC Jul 15 00:52:38.408: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6899' Jul 15 00:52:38.591: INFO: stderr: "" Jul 15 00:52:38.591: INFO: stdout: "service/rm2 exposed\n" Jul 15 00:52:38.603: INFO: Service rm2 in namespace kubectl-6899 found. STEP: exposing service Jul 15 00:52:40.610: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6899' Jul 15 00:52:40.787: INFO: stderr: "" Jul 15 00:52:40.787: INFO: stdout: "service/rm3 exposed\n" Jul 15 00:52:40.831: INFO: Service rm3 in namespace kubectl-6899 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:52:42.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6899" for this suite. 
• [SLOW TEST:9.198 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1241 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":294,"completed":240,"skipped":3726,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:52:42.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4246 [It] should perform canary updates and phased rolling updates of template 
modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Jul 15 00:52:42.987: INFO: Found 0 stateful pods, waiting for 3 Jul 15 00:52:52.992: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 15 00:52:52.992: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 15 00:52:52.992: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jul 15 00:53:02.992: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 15 00:53:02.993: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 15 00:53:02.993: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jul 15 00:53:03.020: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jul 15 00:53:13.084: INFO: Updating stateful set ss2 Jul 15 00:53:13.091: INFO: Waiting for Pod statefulset-4246/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Jul 15 00:53:23.633: INFO: Found 2 stateful pods, waiting for 3 Jul 15 00:53:33.639: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 15 00:53:33.639: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 15 00:53:33.639: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jul 15 00:53:33.665: INFO: Updating stateful set ss2 Jul 15 00:53:33.708: INFO: 
Waiting for Pod statefulset-4246/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 15 00:53:43.736: INFO: Updating stateful set ss2 Jul 15 00:53:43.789: INFO: Waiting for StatefulSet statefulset-4246/ss2 to complete update Jul 15 00:53:43.789: INFO: Waiting for Pod statefulset-4246/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jul 15 00:53:53.798: INFO: Deleting all statefulset in ns statefulset-4246 Jul 15 00:53:53.801: INFO: Scaling statefulset ss2 to 0 Jul 15 00:54:23.836: INFO: Waiting for statefulset status.replicas updated to 0 Jul 15 00:54:23.839: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:54:23.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4246" for this suite. 
• [SLOW TEST:101.017 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":294,"completed":241,"skipped":3727,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:54:23.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role 
binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 15 00:54:24.850: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 15 00:54:27.365: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371264, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371264, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371264, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371264, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 15 00:54:30.756: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 00:54:30.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5726-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:54:32.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8368" for this suite. STEP: Destroying namespace "webhook-8368-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.595 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":294,"completed":242,"skipped":3747,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:54:32.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service 
account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Jul 15 00:54:38.702: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5908 PodName:pod-sharedvolume-a3ebea14-734b-4776-8cd5-cc8153ad9386 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 00:54:38.702: INFO: >>> kubeConfig: /root/.kube/config I0715 00:54:38.740443 7 log.go:181] (0xc002b13290) (0xc0033a8500) Create stream I0715 00:54:38.740485 7 log.go:181] (0xc002b13290) (0xc0033a8500) Stream added, broadcasting: 1 I0715 00:54:38.742485 7 log.go:181] (0xc002b13290) Reply frame received for 1 I0715 00:54:38.742527 7 log.go:181] (0xc002b13290) (0xc0028af900) Create stream I0715 00:54:38.742540 7 log.go:181] (0xc002b13290) (0xc0028af900) Stream added, broadcasting: 3 I0715 00:54:38.743514 7 log.go:181] (0xc002b13290) Reply frame received for 3 I0715 00:54:38.743543 7 log.go:181] (0xc002b13290) (0xc0033a85a0) Create stream I0715 00:54:38.743554 7 log.go:181] (0xc002b13290) (0xc0033a85a0) Stream added, broadcasting: 5 I0715 00:54:38.744353 7 log.go:181] (0xc002b13290) Reply frame received for 5 I0715 00:54:38.793707 7 log.go:181] (0xc002b13290) Data frame received for 5 I0715 00:54:38.793739 7 log.go:181] (0xc0033a85a0) (5) Data frame handling I0715 00:54:38.793761 7 log.go:181] (0xc002b13290) Data frame received for 3 I0715 00:54:38.793777 7 log.go:181] (0xc0028af900) (3) Data frame handling I0715 00:54:38.793793 7 log.go:181] (0xc0028af900) (3) Data frame sent I0715 00:54:38.793809 7 log.go:181] (0xc002b13290) Data frame received for 3 I0715 00:54:38.793824 7 log.go:181] (0xc0028af900) (3) Data frame handling I0715 
00:54:38.795480 7 log.go:181] (0xc002b13290) Data frame received for 1 I0715 00:54:38.795506 7 log.go:181] (0xc0033a8500) (1) Data frame handling I0715 00:54:38.795518 7 log.go:181] (0xc0033a8500) (1) Data frame sent I0715 00:54:38.795529 7 log.go:181] (0xc002b13290) (0xc0033a8500) Stream removed, broadcasting: 1 I0715 00:54:38.795602 7 log.go:181] (0xc002b13290) Go away received I0715 00:54:38.795669 7 log.go:181] (0xc002b13290) (0xc0033a8500) Stream removed, broadcasting: 1 I0715 00:54:38.795720 7 log.go:181] (0xc002b13290) (0xc0028af900) Stream removed, broadcasting: 3 I0715 00:54:38.795744 7 log.go:181] (0xc002b13290) (0xc0033a85a0) Stream removed, broadcasting: 5 Jul 15 00:54:38.795: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:54:38.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5908" for this suite. • [SLOW TEST:6.347 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":294,"completed":243,"skipped":3758,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:54:38.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-65f7d46a-f12b-4419-bfd8-735050485f5e STEP: Creating a pod to test consume secrets Jul 15 00:54:38.908: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6ad126a7-47a3-4353-a376-d75d61758a48" in namespace "projected-3398" to be "Succeeded or Failed" Jul 15 00:54:38.919: INFO: Pod "pod-projected-secrets-6ad126a7-47a3-4353-a376-d75d61758a48": Phase="Pending", Reason="", readiness=false. Elapsed: 10.99523ms Jul 15 00:54:40.952: INFO: Pod "pod-projected-secrets-6ad126a7-47a3-4353-a376-d75d61758a48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0437866s Jul 15 00:54:42.956: INFO: Pod "pod-projected-secrets-6ad126a7-47a3-4353-a376-d75d61758a48": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.048304712s STEP: Saw pod success Jul 15 00:54:42.956: INFO: Pod "pod-projected-secrets-6ad126a7-47a3-4353-a376-d75d61758a48" satisfied condition "Succeeded or Failed" Jul 15 00:54:42.960: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-6ad126a7-47a3-4353-a376-d75d61758a48 container projected-secret-volume-test: STEP: delete the pod Jul 15 00:54:42.992: INFO: Waiting for pod pod-projected-secrets-6ad126a7-47a3-4353-a376-d75d61758a48 to disappear Jul 15 00:54:43.010: INFO: Pod pod-projected-secrets-6ad126a7-47a3-4353-a376-d75d61758a48 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:54:43.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3398" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":244,"skipped":3761,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:54:43.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1540 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jul 15 00:54:43.074: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1187' Jul 15 00:54:43.187: INFO: stderr: "" Jul 15 00:54:43.187: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 Jul 15 00:54:43.209: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1187' Jul 15 00:54:46.133: INFO: stderr: "" Jul 15 00:54:46.133: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:54:46.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1187" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":294,"completed":245,"skipped":3790,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:54:46.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Jul 15 00:54:46.327: INFO: Created pod &Pod{ObjectMeta:{dns-9011 dns-9011 /api/v1/namespaces/dns-9011/pods/dns-9011 6222de4a-19a4-466b-a442-51fc75b83913 1238199 0 2020-07-15 00:54:46 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-07-15 00:54:46 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-trvwb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-trvwb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-trvwb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:
nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 15 00:54:46.337: INFO: The status of Pod dns-9011 is Pending, waiting for it to be Running (with Ready = true) Jul 15 00:54:48.356: INFO: The status of Pod dns-9011 is Pending, waiting for it to be Running (with Ready = true) Jul 15 00:54:50.342: INFO: The status of Pod dns-9011 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Jul 15 00:54:50.342: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-9011 PodName:dns-9011 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 00:54:50.342: INFO: >>> kubeConfig: /root/.kube/config I0715 00:54:50.368955 7 log.go:181] (0xc00206cd10) (0xc002dc95e0) Create stream I0715 00:54:50.368988 7 log.go:181] (0xc00206cd10) (0xc002dc95e0) Stream added, broadcasting: 1 I0715 00:54:50.371030 7 log.go:181] (0xc00206cd10) Reply frame received for 1 I0715 00:54:50.371073 7 log.go:181] (0xc00206cd10) (0xc002525680) Create stream I0715 00:54:50.371086 7 log.go:181] (0xc00206cd10) (0xc002525680) Stream added, broadcasting: 3 I0715 00:54:50.371931 7 log.go:181] (0xc00206cd10) Reply frame received for 3 I0715 00:54:50.371973 7 log.go:181] (0xc00206cd10) (0xc002525720) Create stream I0715 00:54:50.371988 7 log.go:181] (0xc00206cd10) (0xc002525720) Stream added, broadcasting: 5 I0715 00:54:50.373005 7 log.go:181] (0xc00206cd10) Reply frame received for 5 I0715 00:54:50.457494 7 log.go:181] (0xc00206cd10) Data frame received for 3 I0715 00:54:50.457523 7 log.go:181] (0xc002525680) (3) Data frame handling I0715 00:54:50.457548 7 log.go:181] (0xc002525680) (3) Data frame sent I0715 00:54:50.458364 7 log.go:181] (0xc00206cd10) Data frame received for 5 I0715 00:54:50.458392 7 log.go:181] (0xc002525720) (5) Data frame handling I0715 00:54:50.458417 7 log.go:181] (0xc00206cd10) Data frame received for 3 I0715 00:54:50.458430 7 log.go:181] (0xc002525680) (3) Data frame handling I0715 00:54:50.459931 7 log.go:181] (0xc00206cd10) Data frame received for 1 I0715 00:54:50.459951 7 log.go:181] (0xc002dc95e0) (1) Data frame handling I0715 00:54:50.459963 7 log.go:181] (0xc002dc95e0) (1) Data frame sent I0715 00:54:50.459996 7 log.go:181] (0xc00206cd10) (0xc002dc95e0) Stream removed, broadcasting: 1 I0715 00:54:50.460131 7 log.go:181] (0xc00206cd10) (0xc002dc95e0) Stream removed, broadcasting: 1 I0715 00:54:50.460167 7 
log.go:181] (0xc00206cd10) (0xc002525680) Stream removed, broadcasting: 3 I0715 00:54:50.460188 7 log.go:181] (0xc00206cd10) (0xc002525720) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... I0715 00:54:50.460263 7 log.go:181] (0xc00206cd10) Go away received Jul 15 00:54:50.460: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-9011 PodName:dns-9011 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 15 00:54:50.460: INFO: >>> kubeConfig: /root/.kube/config I0715 00:54:50.493457 7 log.go:181] (0xc00206d340) (0xc002dc9860) Create stream I0715 00:54:50.493483 7 log.go:181] (0xc00206d340) (0xc002dc9860) Stream added, broadcasting: 1 I0715 00:54:50.496234 7 log.go:181] (0xc00206d340) Reply frame received for 1 I0715 00:54:50.496289 7 log.go:181] (0xc00206d340) (0xc000e459a0) Create stream I0715 00:54:50.496304 7 log.go:181] (0xc00206d340) (0xc000e459a0) Stream added, broadcasting: 3 I0715 00:54:50.497514 7 log.go:181] (0xc00206d340) Reply frame received for 3 I0715 00:54:50.497562 7 log.go:181] (0xc00206d340) (0xc002dc9900) Create stream I0715 00:54:50.497576 7 log.go:181] (0xc00206d340) (0xc002dc9900) Stream added, broadcasting: 5 I0715 00:54:50.498512 7 log.go:181] (0xc00206d340) Reply frame received for 5 I0715 00:54:50.562323 7 log.go:181] (0xc00206d340) Data frame received for 3 I0715 00:54:50.562354 7 log.go:181] (0xc000e459a0) (3) Data frame handling I0715 00:54:50.562375 7 log.go:181] (0xc000e459a0) (3) Data frame sent I0715 00:54:50.563397 7 log.go:181] (0xc00206d340) Data frame received for 3 I0715 00:54:50.563430 7 log.go:181] (0xc000e459a0) (3) Data frame handling I0715 00:54:50.563833 7 log.go:181] (0xc00206d340) Data frame received for 5 I0715 00:54:50.563865 7 log.go:181] (0xc002dc9900) (5) Data frame handling I0715 00:54:50.565676 7 log.go:181] (0xc00206d340) Data frame received for 1 I0715 00:54:50.565766 7 log.go:181] (0xc002dc9860) (1) Data 
frame handling I0715 00:54:50.565849 7 log.go:181] (0xc002dc9860) (1) Data frame sent I0715 00:54:50.565882 7 log.go:181] (0xc00206d340) (0xc002dc9860) Stream removed, broadcasting: 1 I0715 00:54:50.565908 7 log.go:181] (0xc00206d340) Go away received I0715 00:54:50.566029 7 log.go:181] (0xc00206d340) (0xc002dc9860) Stream removed, broadcasting: 1 I0715 00:54:50.566060 7 log.go:181] (0xc00206d340) (0xc000e459a0) Stream removed, broadcasting: 3 I0715 00:54:50.566082 7 log.go:181] (0xc00206d340) (0xc002dc9900) Stream removed, broadcasting: 5 Jul 15 00:54:50.566: INFO: Deleting pod dns-9011... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:54:50.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9011" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":294,"completed":246,"skipped":3799,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:54:50.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-c7a80015-b975-4e2a-8a57-9a57af531a54 STEP: Creating a pod to test consume secrets Jul 15 00:54:50.704: INFO: Waiting up to 5m0s for pod "pod-secrets-5a3ef7ab-4bcc-439f-a78b-1bbd8dc4e782" in namespace "secrets-4019" to be "Succeeded or Failed" Jul 15 00:54:51.072: INFO: Pod "pod-secrets-5a3ef7ab-4bcc-439f-a78b-1bbd8dc4e782": Phase="Pending", Reason="", readiness=false. Elapsed: 368.345733ms Jul 15 00:54:53.083: INFO: Pod "pod-secrets-5a3ef7ab-4bcc-439f-a78b-1bbd8dc4e782": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379375579s Jul 15 00:54:55.088: INFO: Pod "pod-secrets-5a3ef7ab-4bcc-439f-a78b-1bbd8dc4e782": Phase="Pending", Reason="", readiness=false. Elapsed: 4.38372278s Jul 15 00:54:57.092: INFO: Pod "pod-secrets-5a3ef7ab-4bcc-439f-a78b-1bbd8dc4e782": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.38778806s STEP: Saw pod success Jul 15 00:54:57.092: INFO: Pod "pod-secrets-5a3ef7ab-4bcc-439f-a78b-1bbd8dc4e782" satisfied condition "Succeeded or Failed" Jul 15 00:54:57.095: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-5a3ef7ab-4bcc-439f-a78b-1bbd8dc4e782 container secret-volume-test: STEP: delete the pod Jul 15 00:54:57.157: INFO: Waiting for pod pod-secrets-5a3ef7ab-4bcc-439f-a78b-1bbd8dc4e782 to disappear Jul 15 00:54:57.252: INFO: Pod pod-secrets-5a3ef7ab-4bcc-439f-a78b-1bbd8dc4e782 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:54:57.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4019" for this suite. 
• [SLOW TEST:6.677 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":294,"completed":247,"skipped":3808,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:54:57.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 15 00:54:57.902: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 15 
00:54:59.929: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371297, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371297, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371297, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371297, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 15 00:55:01.933: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371297, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371297, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371297, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371297, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the 
service has paired with the endpoint Jul 15 00:55:05.002: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:55:15.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1889" for this suite. STEP: Destroying namespace "webhook-1889-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.989 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":294,"completed":248,"skipped":3827,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} S ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:55:15.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jul 15 
00:55:15.327: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:55:29.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2090" for this suite. • [SLOW TEST:13.868 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":294,"completed":249,"skipped":3828,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:55:29.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 15 00:55:29.877: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 15 00:55:31.886: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371329, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371329, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371329, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371329, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 15 00:55:34.915: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include 
the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:55:35.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8207" for this suite. STEP: Destroying namespace "webhook-8207-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.096 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":294,"completed":250,"skipped":3848,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:55:35.239: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-4108 STEP: creating service affinity-nodeport-transition in namespace services-4108 STEP: creating replication controller affinity-nodeport-transition in namespace services-4108 I0715 00:55:35.389178 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-4108, replica count: 3 I0715 00:55:38.439651 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0715 00:55:41.439921 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 15 00:55:41.450: INFO: Creating new exec pod Jul 15 00:55:46.469: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-4108 execpod-affinitygmbdc -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Jul 15 00:55:49.653: INFO: stderr: "I0715 00:55:49.574818 3419 log.go:181] (0xc0008bcbb0) (0xc0004281e0) Create stream\nI0715 00:55:49.574879 3419 log.go:181] (0xc0008bcbb0) (0xc0004281e0) Stream added, broadcasting: 1\nI0715 00:55:49.578808 3419 log.go:181] (0xc0008bcbb0) Reply frame received for 1\nI0715 00:55:49.578854 3419 log.go:181] (0xc0008bcbb0) (0xc000132640) Create stream\nI0715 00:55:49.578880 3419 log.go:181] (0xc0008bcbb0) (0xc000132640) Stream added, broadcasting: 3\nI0715 
00:55:49.579984 3419 log.go:181] (0xc0008bcbb0) Reply frame received for 3\nI0715 00:55:49.580037 3419 log.go:181] (0xc0008bcbb0) (0xc000428320) Create stream\nI0715 00:55:49.580061 3419 log.go:181] (0xc0008bcbb0) (0xc000428320) Stream added, broadcasting: 5\nI0715 00:55:49.581050 3419 log.go:181] (0xc0008bcbb0) Reply frame received for 5\nI0715 00:55:49.643818 3419 log.go:181] (0xc0008bcbb0) Data frame received for 5\nI0715 00:55:49.643857 3419 log.go:181] (0xc000428320) (5) Data frame handling\nI0715 00:55:49.643900 3419 log.go:181] (0xc000428320) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0715 00:55:49.644456 3419 log.go:181] (0xc0008bcbb0) Data frame received for 5\nI0715 00:55:49.644494 3419 log.go:181] (0xc000428320) (5) Data frame handling\nI0715 00:55:49.644526 3419 log.go:181] (0xc000428320) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0715 00:55:49.645073 3419 log.go:181] (0xc0008bcbb0) Data frame received for 5\nI0715 00:55:49.645113 3419 log.go:181] (0xc000428320) (5) Data frame handling\nI0715 00:55:49.645133 3419 log.go:181] (0xc0008bcbb0) Data frame received for 3\nI0715 00:55:49.645147 3419 log.go:181] (0xc000132640) (3) Data frame handling\nI0715 00:55:49.647190 3419 log.go:181] (0xc0008bcbb0) Data frame received for 1\nI0715 00:55:49.647210 3419 log.go:181] (0xc0004281e0) (1) Data frame handling\nI0715 00:55:49.647230 3419 log.go:181] (0xc0004281e0) (1) Data frame sent\nI0715 00:55:49.647249 3419 log.go:181] (0xc0008bcbb0) (0xc0004281e0) Stream removed, broadcasting: 1\nI0715 00:55:49.647306 3419 log.go:181] (0xc0008bcbb0) Go away received\nI0715 00:55:49.647745 3419 log.go:181] (0xc0008bcbb0) (0xc0004281e0) Stream removed, broadcasting: 1\nI0715 00:55:49.647765 3419 log.go:181] (0xc0008bcbb0) (0xc000132640) Stream removed, broadcasting: 3\nI0715 00:55:49.647777 3419 log.go:181] (0xc0008bcbb0) (0xc000428320) Stream removed, broadcasting: 5\n" Jul 15 00:55:49.653: INFO: 
stdout: "" Jul 15 00:55:49.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-4108 execpod-affinitygmbdc -- /bin/sh -x -c nc -zv -t -w 2 10.110.91.189 80' Jul 15 00:55:49.866: INFO: stderr: "I0715 00:55:49.797120 3438 log.go:181] (0xc00054f550) (0xc0009f7360) Create stream\nI0715 00:55:49.797190 3438 log.go:181] (0xc00054f550) (0xc0009f7360) Stream added, broadcasting: 1\nI0715 00:55:49.799642 3438 log.go:181] (0xc00054f550) Reply frame received for 1\nI0715 00:55:49.799702 3438 log.go:181] (0xc00054f550) (0xc000b585a0) Create stream\nI0715 00:55:49.799722 3438 log.go:181] (0xc00054f550) (0xc000b585a0) Stream added, broadcasting: 3\nI0715 00:55:49.800514 3438 log.go:181] (0xc00054f550) Reply frame received for 3\nI0715 00:55:49.800539 3438 log.go:181] (0xc00054f550) (0xc0009f7ae0) Create stream\nI0715 00:55:49.800545 3438 log.go:181] (0xc00054f550) (0xc0009f7ae0) Stream added, broadcasting: 5\nI0715 00:55:49.801444 3438 log.go:181] (0xc00054f550) Reply frame received for 5\nI0715 00:55:49.859518 3438 log.go:181] (0xc00054f550) Data frame received for 3\nI0715 00:55:49.859584 3438 log.go:181] (0xc000b585a0) (3) Data frame handling\nI0715 00:55:49.859617 3438 log.go:181] (0xc00054f550) Data frame received for 5\nI0715 00:55:49.859635 3438 log.go:181] (0xc0009f7ae0) (5) Data frame handling\nI0715 00:55:49.859654 3438 log.go:181] (0xc0009f7ae0) (5) Data frame sent\nI0715 00:55:49.859691 3438 log.go:181] (0xc00054f550) Data frame received for 5\nI0715 00:55:49.859706 3438 log.go:181] (0xc0009f7ae0) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.91.189 80\nConnection to 10.110.91.189 80 port [tcp/http] succeeded!\nI0715 00:55:49.861183 3438 log.go:181] (0xc00054f550) Data frame received for 1\nI0715 00:55:49.861208 3438 log.go:181] (0xc0009f7360) (1) Data frame handling\nI0715 00:55:49.861220 3438 log.go:181] (0xc0009f7360) (1) Data frame sent\nI0715 00:55:49.861340 3438 log.go:181] 
(0xc00054f550) (0xc0009f7360) Stream removed, broadcasting: 1\nI0715 00:55:49.861675 3438 log.go:181] (0xc00054f550) (0xc0009f7360) Stream removed, broadcasting: 1\nI0715 00:55:49.861690 3438 log.go:181] (0xc00054f550) (0xc000b585a0) Stream removed, broadcasting: 3\nI0715 00:55:49.861699 3438 log.go:181] (0xc00054f550) (0xc0009f7ae0) Stream removed, broadcasting: 5\n" Jul 15 00:55:49.866: INFO: stdout: "" Jul 15 00:55:49.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-4108 execpod-affinitygmbdc -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30766' Jul 15 00:55:50.067: INFO: stderr: "I0715 00:55:49.995746 3456 log.go:181] (0xc00015ef20) (0xc00032c460) Create stream\nI0715 00:55:49.995798 3456 log.go:181] (0xc00015ef20) (0xc00032c460) Stream added, broadcasting: 1\nI0715 00:55:50.001213 3456 log.go:181] (0xc00015ef20) Reply frame received for 1\nI0715 00:55:50.001244 3456 log.go:181] (0xc00015ef20) (0xc000b17220) Create stream\nI0715 00:55:50.001254 3456 log.go:181] (0xc00015ef20) (0xc000b17220) Stream added, broadcasting: 3\nI0715 00:55:50.002428 3456 log.go:181] (0xc00015ef20) Reply frame received for 3\nI0715 00:55:50.002498 3456 log.go:181] (0xc00015ef20) (0xc00059cbe0) Create stream\nI0715 00:55:50.002536 3456 log.go:181] (0xc00015ef20) (0xc00059cbe0) Stream added, broadcasting: 5\nI0715 00:55:50.003414 3456 log.go:181] (0xc00015ef20) Reply frame received for 5\nI0715 00:55:50.058918 3456 log.go:181] (0xc00015ef20) Data frame received for 5\nI0715 00:55:50.058956 3456 log.go:181] (0xc00059cbe0) (5) Data frame handling\nI0715 00:55:50.058978 3456 log.go:181] (0xc00059cbe0) (5) Data frame sent\nI0715 00:55:50.058990 3456 log.go:181] (0xc00015ef20) Data frame received for 5\nI0715 00:55:50.059000 3456 log.go:181] (0xc00059cbe0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 30766\nConnection to 172.18.0.14 30766 port [tcp/30766] succeeded!\nI0715 00:55:50.059049 3456 
log.go:181] (0xc00015ef20) Data frame received for 3\nI0715 00:55:50.059076 3456 log.go:181] (0xc000b17220) (3) Data frame handling\nI0715 00:55:50.060903 3456 log.go:181] (0xc00015ef20) Data frame received for 1\nI0715 00:55:50.060930 3456 log.go:181] (0xc00032c460) (1) Data frame handling\nI0715 00:55:50.060955 3456 log.go:181] (0xc00032c460) (1) Data frame sent\nI0715 00:55:50.060970 3456 log.go:181] (0xc00015ef20) (0xc00032c460) Stream removed, broadcasting: 1\nI0715 00:55:50.060987 3456 log.go:181] (0xc00015ef20) Go away received\nI0715 00:55:50.061409 3456 log.go:181] (0xc00015ef20) (0xc00032c460) Stream removed, broadcasting: 1\nI0715 00:55:50.061434 3456 log.go:181] (0xc00015ef20) (0xc000b17220) Stream removed, broadcasting: 3\nI0715 00:55:50.061448 3456 log.go:181] (0xc00015ef20) (0xc00059cbe0) Stream removed, broadcasting: 5\n" Jul 15 00:55:50.067: INFO: stdout: "" Jul 15 00:55:50.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-4108 execpod-affinitygmbdc -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 30766' Jul 15 00:55:50.282: INFO: stderr: "I0715 00:55:50.215477 3475 log.go:181] (0xc000e26d10) (0xc000e92460) Create stream\nI0715 00:55:50.215527 3475 log.go:181] (0xc000e26d10) (0xc000e92460) Stream added, broadcasting: 1\nI0715 00:55:50.220822 3475 log.go:181] (0xc000e26d10) Reply frame received for 1\nI0715 00:55:50.220874 3475 log.go:181] (0xc000e26d10) (0xc000a89180) Create stream\nI0715 00:55:50.220888 3475 log.go:181] (0xc000e26d10) (0xc000a89180) Stream added, broadcasting: 3\nI0715 00:55:50.221896 3475 log.go:181] (0xc000e26d10) Reply frame received for 3\nI0715 00:55:50.221964 3475 log.go:181] (0xc000e26d10) (0xc00083c960) Create stream\nI0715 00:55:50.221997 3475 log.go:181] (0xc000e26d10) (0xc00083c960) Stream added, broadcasting: 5\nI0715 00:55:50.222973 3475 log.go:181] (0xc000e26d10) Reply frame received for 5\nI0715 00:55:50.275050 3475 log.go:181] 
(0xc000e26d10) Data frame received for 5\nI0715 00:55:50.275080 3475 log.go:181] (0xc00083c960) (5) Data frame handling\nI0715 00:55:50.275090 3475 log.go:181] (0xc00083c960) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.11 30766\nConnection to 172.18.0.11 30766 port [tcp/30766] succeeded!\nI0715 00:55:50.275100 3475 log.go:181] (0xc000e26d10) Data frame received for 3\nI0715 00:55:50.275131 3475 log.go:181] (0xc000a89180) (3) Data frame handling\nI0715 00:55:50.275350 3475 log.go:181] (0xc000e26d10) Data frame received for 5\nI0715 00:55:50.275370 3475 log.go:181] (0xc00083c960) (5) Data frame handling\nI0715 00:55:50.276804 3475 log.go:181] (0xc000e26d10) Data frame received for 1\nI0715 00:55:50.276834 3475 log.go:181] (0xc000e92460) (1) Data frame handling\nI0715 00:55:50.276851 3475 log.go:181] (0xc000e92460) (1) Data frame sent\nI0715 00:55:50.276895 3475 log.go:181] (0xc000e26d10) (0xc000e92460) Stream removed, broadcasting: 1\nI0715 00:55:50.277192 3475 log.go:181] (0xc000e26d10) Go away received\nI0715 00:55:50.277249 3475 log.go:181] (0xc000e26d10) (0xc000e92460) Stream removed, broadcasting: 1\nI0715 00:55:50.277273 3475 log.go:181] (0xc000e26d10) (0xc000a89180) Stream removed, broadcasting: 3\nI0715 00:55:50.277282 3475 log.go:181] (0xc000e26d10) (0xc00083c960) Stream removed, broadcasting: 5\n" Jul 15 00:55:50.282: INFO: stdout: "" Jul 15 00:55:50.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-4108 execpod-affinitygmbdc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:30766/ ; done' Jul 15 00:55:50.625: INFO: stderr: "I0715 00:55:50.437020 3493 log.go:181] (0xc000c2b550) (0xc00093b900) Create stream\nI0715 00:55:50.437093 3493 log.go:181] (0xc000c2b550) (0xc00093b900) Stream added, broadcasting: 1\nI0715 00:55:50.441867 3493 log.go:181] (0xc000c2b550) Reply frame received for 1\nI0715 00:55:50.441909 3493 
log.go:181] (0xc000c2b550) (0xc0006972c0) Create stream\nI0715 00:55:50.441923 3493 log.go:181] (0xc000c2b550) (0xc0006972c0) Stream added, broadcasting: 3\nI0715 00:55:50.442865 3493 log.go:181] (0xc000c2b550) Reply frame received for 3\nI0715 00:55:50.442907 3493 log.go:181] (0xc000c2b550) (0xc000697540) Create stream\nI0715 00:55:50.442919 3493 log.go:181] (0xc000c2b550) (0xc000697540) Stream added, broadcasting: 5\nI0715 00:55:50.443989 3493 log.go:181] (0xc000c2b550) Reply frame received for 5\nI0715 00:55:50.507808 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.507850 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.507864 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.507893 3493 log.go:181] (0xc000c2b550) Data frame received for 5\nI0715 00:55:50.507904 3493 log.go:181] (0xc000697540) (5) Data frame handling\nI0715 00:55:50.507929 3493 log.go:181] (0xc000697540) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.513011 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.513033 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.513052 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.513867 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.513910 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.513933 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.513967 3493 log.go:181] (0xc000c2b550) Data frame received for 5\nI0715 00:55:50.513988 3493 log.go:181] (0xc000697540) (5) Data frame handling\nI0715 00:55:50.514013 3493 log.go:181] (0xc000697540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.522255 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.522282 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 
00:55:50.522295 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.523282 3493 log.go:181] (0xc000c2b550) Data frame received for 5\nI0715 00:55:50.523306 3493 log.go:181] (0xc000697540) (5) Data frame handling\nI0715 00:55:50.523323 3493 log.go:181] (0xc000697540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.523347 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.523358 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.523384 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.531308 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.531339 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.531372 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.531861 3493 log.go:181] (0xc000c2b550) Data frame received for 5\nI0715 00:55:50.531886 3493 log.go:181] (0xc000697540) (5) Data frame handling\nI0715 00:55:50.531895 3493 log.go:181] (0xc000697540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.531904 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.531908 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.531912 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.538858 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.538879 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.538902 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.539698 3493 log.go:181] (0xc000c2b550) Data frame received for 5\nI0715 00:55:50.539722 3493 log.go:181] (0xc000697540) (5) Data frame handling\nI0715 00:55:50.539735 3493 log.go:181] (0xc000697540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.539747 3493 log.go:181] (0xc000c2b550) Data frame received for 
3\nI0715 00:55:50.539763 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.539780 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.545677 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.545698 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.545712 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.546065 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.546079 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.546090 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.546156 3493 log.go:181] (0xc000c2b550) Data frame received for 5\nI0715 00:55:50.546172 3493 log.go:181] (0xc000697540) (5) Data frame handling\nI0715 00:55:50.546184 3493 log.go:181] (0xc000697540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.549943 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.549957 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.549968 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.550246 3493 log.go:181] (0xc000c2b550) Data frame received for 5\nI0715 00:55:50.550271 3493 log.go:181] (0xc000697540) (5) Data frame handling\nI0715 00:55:50.550283 3493 log.go:181] (0xc000697540) (5) Data frame sent\nI0715 00:55:50.550294 3493 log.go:181] (0xc000c2b550) Data frame received for 5\nI0715 00:55:50.550310 3493 log.go:181] (0xc000697540) (5) Data frame handling\nI0715 00:55:50.550326 3493 log.go:181] (0xc000c2b550) Data frame received for 3\n+ echo\n+ curl -q -sI0715 00:55:50.550352 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.550361 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\n --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.550379 3493 log.go:181] (0xc000697540) (5) Data frame sent\nI0715 00:55:50.555124 3493 log.go:181] (0xc000c2b550) Data 
frame received for 3\nI0715 00:55:50.555138 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.555154 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.555711 3493 log.go:181] (0xc000c2b550) Data frame received for 5\nI0715 00:55:50.555725 3493 log.go:181] (0xc000697540) (5) Data frame handling\nI0715 00:55:50.555730 3493 log.go:181] (0xc000697540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.555739 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.555748 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.555752 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.560053 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.560076 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.560103 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.560895 3493 log.go:181] (0xc000c2b550) Data frame received for 5\nI0715 00:55:50.560911 3493 log.go:181] (0xc000697540) (5) Data frame handling\nI0715 00:55:50.560923 3493 log.go:181] (0xc000697540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0715 00:55:50.560992 3493 log.go:181] (0xc000c2b550) Data frame received for 5\nI0715 00:55:50.561006 3493 log.go:181] (0xc000697540) (5) Data frame handling\nI0715 00:55:50.561015 3493 log.go:181] (0xc000697540) (5) Data frame sent\n http://172.18.0.14:30766/\nI0715 00:55:50.561227 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.561239 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.561247 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.568506 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.568529 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.568554 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.569063 3493 log.go:181] 
(0xc000c2b550) Data frame received for 3\nI0715 00:55:50.569086 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.569097 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.569116 3493 log.go:181] (0xc000c2b550) Data frame received for 5\nI0715 00:55:50.569125 3493 log.go:181] (0xc000697540) (5) Data frame handling\nI0715 00:55:50.569135 3493 log.go:181] (0xc000697540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.575271 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.575287 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.575296 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.575775 3493 log.go:181] (0xc000c2b550) Data frame received for 5\nI0715 00:55:50.575790 3493 log.go:181] (0xc000697540) (5) Data frame handling\nI0715 00:55:50.575804 3493 log.go:181] (0xc000697540) (5) Data frame sent\nI0715 00:55:50.575810 3493 log.go:181] (0xc000c2b550) Data frame received for 5\nI0715 00:55:50.575816 3493 log.go:181] (0xc000697540) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.575831 3493 log.go:181] (0xc000697540) (5) Data frame sent\nI0715 00:55:50.575896 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.575923 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.575951 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.582495 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.582512 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.582523 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.583326 3493 log.go:181] (0xc000c2b550) Data frame received for 5\nI0715 00:55:50.583390 3493 log.go:181] (0xc000697540) (5) Data frame handling\nI0715 00:55:50.583412 3493 log.go:181] (0xc000697540) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.583455 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.583479 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.583503 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.590113 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.590135 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.590146 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.590663 3493 log.go:181] (0xc000c2b550) Data frame received for 5\nI0715 00:55:50.590687 3493 log.go:181] (0xc000697540) (5) Data frame handling\nI0715 00:55:50.590699 3493 log.go:181] (0xc000697540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.590717 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.590735 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.590752 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.596756 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.596808 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.596858 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.597295 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.597327 3493 log.go:181] (0xc000c2b550) Data frame received for 5\nI0715 00:55:50.597368 3493 log.go:181] (0xc000697540) (5) Data frame handling\nI0715 00:55:50.597386 3493 log.go:181] (0xc000697540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.597404 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.597415 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.602356 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.602377 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 
00:55:50.602389 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.603157 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.603182 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.603206 3493 log.go:181] (0xc000c2b550) Data frame received for 5\nI0715 00:55:50.603235 3493 log.go:181] (0xc000697540) (5) Data frame handling\nI0715 00:55:50.603260 3493 log.go:181] (0xc000697540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.603274 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.610440 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.610474 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.610505 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.614714 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.614749 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.614766 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.614787 3493 log.go:181] (0xc000c2b550) Data frame received for 5\nI0715 00:55:50.614799 3493 log.go:181] (0xc000697540) (5) Data frame handling\nI0715 00:55:50.614821 3493 log.go:181] (0xc000697540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.618400 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.618427 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.618449 3493 log.go:181] (0xc0006972c0) (3) Data frame sent\nI0715 00:55:50.619099 3493 log.go:181] (0xc000c2b550) Data frame received for 5\nI0715 00:55:50.619128 3493 log.go:181] (0xc000697540) (5) Data frame handling\nI0715 00:55:50.619165 3493 log.go:181] (0xc000c2b550) Data frame received for 3\nI0715 00:55:50.619185 3493 log.go:181] (0xc0006972c0) (3) Data frame handling\nI0715 00:55:50.620661 3493 log.go:181] (0xc000c2b550) Data frame 
received for 1\nI0715 00:55:50.620692 3493 log.go:181] (0xc00093b900) (1) Data frame handling\nI0715 00:55:50.620704 3493 log.go:181] (0xc00093b900) (1) Data frame sent\nI0715 00:55:50.620797 3493 log.go:181] (0xc000c2b550) (0xc00093b900) Stream removed, broadcasting: 1\nI0715 00:55:50.620832 3493 log.go:181] (0xc000c2b550) Go away received\nI0715 00:55:50.621384 3493 log.go:181] (0xc000c2b550) (0xc00093b900) Stream removed, broadcasting: 1\nI0715 00:55:50.621415 3493 log.go:181] (0xc000c2b550) (0xc0006972c0) Stream removed, broadcasting: 3\nI0715 00:55:50.621433 3493 log.go:181] (0xc000c2b550) (0xc000697540) Stream removed, broadcasting: 5\n" Jul 15 00:55:50.626: INFO: stdout: "\naffinity-nodeport-transition-8jgzj\naffinity-nodeport-transition-96nx4\naffinity-nodeport-transition-8jgzj\naffinity-nodeport-transition-96nx4\naffinity-nodeport-transition-r7tlw\naffinity-nodeport-transition-96nx4\naffinity-nodeport-transition-8jgzj\naffinity-nodeport-transition-8jgzj\naffinity-nodeport-transition-96nx4\naffinity-nodeport-transition-96nx4\naffinity-nodeport-transition-r7tlw\naffinity-nodeport-transition-96nx4\naffinity-nodeport-transition-96nx4\naffinity-nodeport-transition-96nx4\naffinity-nodeport-transition-r7tlw\naffinity-nodeport-transition-8jgzj" Jul 15 00:55:50.626: INFO: Received response from host: affinity-nodeport-transition-8jgzj Jul 15 00:55:50.626: INFO: Received response from host: affinity-nodeport-transition-96nx4 Jul 15 00:55:50.626: INFO: Received response from host: affinity-nodeport-transition-8jgzj Jul 15 00:55:50.626: INFO: Received response from host: affinity-nodeport-transition-96nx4 Jul 15 00:55:50.626: INFO: Received response from host: affinity-nodeport-transition-r7tlw Jul 15 00:55:50.626: INFO: Received response from host: affinity-nodeport-transition-96nx4 Jul 15 00:55:50.626: INFO: Received response from host: affinity-nodeport-transition-8jgzj Jul 15 00:55:50.626: INFO: Received response from host: affinity-nodeport-transition-8jgzj Jul 
15 00:55:50.626: INFO: Received response from host: affinity-nodeport-transition-96nx4 Jul 15 00:55:50.626: INFO: Received response from host: affinity-nodeport-transition-96nx4 Jul 15 00:55:50.626: INFO: Received response from host: affinity-nodeport-transition-r7tlw Jul 15 00:55:50.626: INFO: Received response from host: affinity-nodeport-transition-96nx4 Jul 15 00:55:50.626: INFO: Received response from host: affinity-nodeport-transition-96nx4 Jul 15 00:55:50.626: INFO: Received response from host: affinity-nodeport-transition-96nx4 Jul 15 00:55:50.626: INFO: Received response from host: affinity-nodeport-transition-r7tlw Jul 15 00:55:50.626: INFO: Received response from host: affinity-nodeport-transition-8jgzj Jul 15 00:55:50.635: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config exec --namespace=services-4108 execpod-affinitygmbdc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:30766/ ; done' Jul 15 00:55:50.926: INFO: stderr: "I0715 00:55:50.767185 3511 log.go:181] (0xc0008b5290) (0xc000b5f680) Create stream\nI0715 00:55:50.767244 3511 log.go:181] (0xc0008b5290) (0xc000b5f680) Stream added, broadcasting: 1\nI0715 00:55:50.773329 3511 log.go:181] (0xc0008b5290) Reply frame received for 1\nI0715 00:55:50.773381 3511 log.go:181] (0xc0008b5290) (0xc0009b8640) Create stream\nI0715 00:55:50.773399 3511 log.go:181] (0xc0008b5290) (0xc0009b8640) Stream added, broadcasting: 3\nI0715 00:55:50.774433 3511 log.go:181] (0xc0008b5290) Reply frame received for 3\nI0715 00:55:50.774472 3511 log.go:181] (0xc0008b5290) (0xc0003806e0) Create stream\nI0715 00:55:50.774484 3511 log.go:181] (0xc0008b5290) (0xc0003806e0) Stream added, broadcasting: 5\nI0715 00:55:50.775158 3511 log.go:181] (0xc0008b5290) Reply frame received for 5\nI0715 00:55:50.830460 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.830488 3511 log.go:181] (0xc0009b8640) (3) Data 
frame handling\nI0715 00:55:50.830496 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.830521 3511 log.go:181] (0xc0008b5290) Data frame received for 5\nI0715 00:55:50.830539 3511 log.go:181] (0xc0003806e0) (5) Data frame handling\nI0715 00:55:50.830550 3511 log.go:181] (0xc0003806e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.834510 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.834530 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.834544 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.835223 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.835250 3511 log.go:181] (0xc0008b5290) Data frame received for 5\nI0715 00:55:50.835282 3511 log.go:181] (0xc0003806e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.835308 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.835336 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.835360 3511 log.go:181] (0xc0003806e0) (5) Data frame sent\nI0715 00:55:50.839666 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.839684 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.839693 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.840347 3511 log.go:181] (0xc0008b5290) Data frame received for 5\nI0715 00:55:50.840364 3511 log.go:181] (0xc0003806e0) (5) Data frame handling\nI0715 00:55:50.840382 3511 log.go:181] (0xc0003806e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.840420 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.840438 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.840458 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.843593 3511 log.go:181] 
(0xc0008b5290) Data frame received for 3\nI0715 00:55:50.843608 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.843624 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.844410 3511 log.go:181] (0xc0008b5290) Data frame received for 5\nI0715 00:55:50.844448 3511 log.go:181] (0xc0003806e0) (5) Data frame handling\nI0715 00:55:50.844462 3511 log.go:181] (0xc0003806e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.844477 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.844501 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.844517 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.848130 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.848147 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.848156 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.848521 3511 log.go:181] (0xc0008b5290) Data frame received for 5\nI0715 00:55:50.848541 3511 log.go:181] (0xc0003806e0) (5) Data frame handling\nI0715 00:55:50.848558 3511 log.go:181] (0xc0003806e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0715 00:55:50.848653 3511 log.go:181] (0xc0008b5290) Data frame received for 5\nI0715 00:55:50.848665 3511 log.go:181] (0xc0003806e0) (5) Data frame handling\nI0715 00:55:50.848687 3511 log.go:181] (0xc0003806e0) (5) Data frame sent\nI0715 00:55:50.848805 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.848821 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.848834 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\n 2 http://172.18.0.14:30766/\nI0715 00:55:50.852870 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.852896 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.852921 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.853555 
3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.853594 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.853607 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.853628 3511 log.go:181] (0xc0008b5290) Data frame received for 5\nI0715 00:55:50.853638 3511 log.go:181] (0xc0003806e0) (5) Data frame handling\nI0715 00:55:50.853649 3511 log.go:181] (0xc0003806e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.857338 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.857351 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.857356 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.858335 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.858350 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.858355 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.858387 3511 log.go:181] (0xc0008b5290) Data frame received for 5\nI0715 00:55:50.858411 3511 log.go:181] (0xc0003806e0) (5) Data frame handling\nI0715 00:55:50.858431 3511 log.go:181] (0xc0003806e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.862865 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.862897 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.862929 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.863338 3511 log.go:181] (0xc0008b5290) Data frame received for 5\nI0715 00:55:50.863348 3511 log.go:181] (0xc0003806e0) (5) Data frame handling\nI0715 00:55:50.863356 3511 log.go:181] (0xc0003806e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.863441 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.863458 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 
00:55:50.863468 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.870108 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.870137 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.870164 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.870754 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.870778 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.870790 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.870808 3511 log.go:181] (0xc0008b5290) Data frame received for 5\nI0715 00:55:50.870818 3511 log.go:181] (0xc0003806e0) (5) Data frame handling\nI0715 00:55:50.870832 3511 log.go:181] (0xc0003806e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.876411 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.876431 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.876459 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.876917 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.876941 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.876958 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.876993 3511 log.go:181] (0xc0008b5290) Data frame received for 5\nI0715 00:55:50.877003 3511 log.go:181] (0xc0003806e0) (5) Data frame handling\nI0715 00:55:50.877037 3511 log.go:181] (0xc0003806e0) (5) Data frame sent\nI0715 00:55:50.877069 3511 log.go:181] (0xc0008b5290) Data frame received for 5\nI0715 00:55:50.877085 3511 log.go:181] (0xc0003806e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.877129 3511 log.go:181] (0xc0003806e0) (5) Data frame sent\nI0715 00:55:50.880636 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.880656 3511 log.go:181] (0xc0009b8640) (3) Data frame 
handling\nI0715 00:55:50.880675 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.881079 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.881097 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.881107 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.881117 3511 log.go:181] (0xc0008b5290) Data frame received for 5\nI0715 00:55:50.881142 3511 log.go:181] (0xc0003806e0) (5) Data frame handling\nI0715 00:55:50.881170 3511 log.go:181] (0xc0003806e0) (5) Data frame sent\nI0715 00:55:50.881178 3511 log.go:181] (0xc0008b5290) Data frame received for 5\nI0715 00:55:50.881184 3511 log.go:181] (0xc0003806e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.881206 3511 log.go:181] (0xc0003806e0) (5) Data frame sent\nI0715 00:55:50.885636 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.885651 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.885669 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.886214 3511 log.go:181] (0xc0008b5290) Data frame received for 5\nI0715 00:55:50.886232 3511 log.go:181] (0xc0003806e0) (5) Data frame handling\nI0715 00:55:50.886239 3511 log.go:181] (0xc0003806e0) (5) Data frame sent\nI0715 00:55:50.886248 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.886254 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.886260 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.891598 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.891611 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.891619 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.892235 3511 log.go:181] (0xc0008b5290) Data frame received for 5\nI0715 00:55:50.892255 3511 log.go:181] 
(0xc0008b5290) Data frame received for 3\nI0715 00:55:50.892284 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.892295 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.892307 3511 log.go:181] (0xc0003806e0) (5) Data frame handling\nI0715 00:55:50.892328 3511 log.go:181] (0xc0003806e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.898416 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.898437 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.898463 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.899193 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.899210 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.899223 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.899240 3511 log.go:181] (0xc0008b5290) Data frame received for 5\nI0715 00:55:50.899249 3511 log.go:181] (0xc0003806e0) (5) Data frame handling\nI0715 00:55:50.899258 3511 log.go:181] (0xc0003806e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.904679 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.904712 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.904874 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.905290 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.905302 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.905308 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.905353 3511 log.go:181] (0xc0008b5290) Data frame received for 5\nI0715 00:55:50.905372 3511 log.go:181] (0xc0003806e0) (5) Data frame handling\nI0715 00:55:50.905388 3511 log.go:181] (0xc0003806e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.910840 3511 
log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.910858 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.910874 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.911558 3511 log.go:181] (0xc0008b5290) Data frame received for 5\nI0715 00:55:50.911570 3511 log.go:181] (0xc0003806e0) (5) Data frame handling\nI0715 00:55:50.911576 3511 log.go:181] (0xc0003806e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:30766/\nI0715 00:55:50.911663 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.911672 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.911678 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.917630 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.917665 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.917691 3511 log.go:181] (0xc0009b8640) (3) Data frame sent\nI0715 00:55:50.918501 3511 log.go:181] (0xc0008b5290) Data frame received for 3\nI0715 00:55:50.918522 3511 log.go:181] (0xc0009b8640) (3) Data frame handling\nI0715 00:55:50.918542 3511 log.go:181] (0xc0008b5290) Data frame received for 5\nI0715 00:55:50.918553 3511 log.go:181] (0xc0003806e0) (5) Data frame handling\nI0715 00:55:50.920354 3511 log.go:181] (0xc0008b5290) Data frame received for 1\nI0715 00:55:50.920372 3511 log.go:181] (0xc000b5f680) (1) Data frame handling\nI0715 00:55:50.920391 3511 log.go:181] (0xc000b5f680) (1) Data frame sent\nI0715 00:55:50.920406 3511 log.go:181] (0xc0008b5290) (0xc000b5f680) Stream removed, broadcasting: 1\nI0715 00:55:50.920495 3511 log.go:181] (0xc0008b5290) Go away received\nI0715 00:55:50.920859 3511 log.go:181] (0xc0008b5290) (0xc000b5f680) Stream removed, broadcasting: 1\nI0715 00:55:50.920875 3511 log.go:181] (0xc0008b5290) (0xc0009b8640) Stream removed, broadcasting: 3\nI0715 00:55:50.920883 3511 log.go:181] (0xc0008b5290) (0xc0003806e0) Stream removed, 
broadcasting: 5\n" Jul 15 00:55:50.927: INFO: stdout: "\naffinity-nodeport-transition-8jgzj\naffinity-nodeport-transition-8jgzj\naffinity-nodeport-transition-8jgzj\naffinity-nodeport-transition-8jgzj\naffinity-nodeport-transition-8jgzj\naffinity-nodeport-transition-8jgzj\naffinity-nodeport-transition-8jgzj\naffinity-nodeport-transition-8jgzj\naffinity-nodeport-transition-8jgzj\naffinity-nodeport-transition-8jgzj\naffinity-nodeport-transition-8jgzj\naffinity-nodeport-transition-8jgzj\naffinity-nodeport-transition-8jgzj\naffinity-nodeport-transition-8jgzj\naffinity-nodeport-transition-8jgzj\naffinity-nodeport-transition-8jgzj" Jul 15 00:55:50.927: INFO: Received response from host: affinity-nodeport-transition-8jgzj Jul 15 00:55:50.927: INFO: Received response from host: affinity-nodeport-transition-8jgzj Jul 15 00:55:50.927: INFO: Received response from host: affinity-nodeport-transition-8jgzj Jul 15 00:55:50.927: INFO: Received response from host: affinity-nodeport-transition-8jgzj Jul 15 00:55:50.927: INFO: Received response from host: affinity-nodeport-transition-8jgzj Jul 15 00:55:50.927: INFO: Received response from host: affinity-nodeport-transition-8jgzj Jul 15 00:55:50.927: INFO: Received response from host: affinity-nodeport-transition-8jgzj Jul 15 00:55:50.927: INFO: Received response from host: affinity-nodeport-transition-8jgzj Jul 15 00:55:50.927: INFO: Received response from host: affinity-nodeport-transition-8jgzj Jul 15 00:55:50.927: INFO: Received response from host: affinity-nodeport-transition-8jgzj Jul 15 00:55:50.927: INFO: Received response from host: affinity-nodeport-transition-8jgzj Jul 15 00:55:50.927: INFO: Received response from host: affinity-nodeport-transition-8jgzj Jul 15 00:55:50.927: INFO: Received response from host: affinity-nodeport-transition-8jgzj Jul 15 00:55:50.927: INFO: Received response from host: affinity-nodeport-transition-8jgzj Jul 15 00:55:50.927: INFO: Received response from host: affinity-nodeport-transition-8jgzj 
Jul 15 00:55:50.927: INFO: Received response from host: affinity-nodeport-transition-8jgzj Jul 15 00:55:50.928: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-4108, will wait for the garbage collector to delete the pods Jul 15 00:55:51.493: INFO: Deleting ReplicationController affinity-nodeport-transition took: 467.695959ms Jul 15 00:55:51.794: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 300.314357ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:56:09.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4108" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:34.034 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":294,"completed":251,"skipped":3862,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:56:09.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components Jul 15 00:56:09.393: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Jul 15 00:56:09.393: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7961' Jul 15 00:56:09.758: INFO: stderr: "" Jul 15 00:56:09.758: INFO: stdout: "service/agnhost-replica created\n" Jul 15 00:56:09.758: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Jul 15 00:56:09.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7961' Jul 15 00:56:10.078: INFO: stderr: "" Jul 15 00:56:10.078: INFO: stdout: "service/agnhost-primary created\n" Jul 15 00:56:10.078: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an 
external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jul 15 00:56:10.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7961' Jul 15 00:56:10.445: INFO: stderr: "" Jul 15 00:56:10.445: INFO: stdout: "service/frontend created\n" Jul 15 00:56:10.445: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Jul 15 00:56:10.445: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7961' Jul 15 00:56:10.721: INFO: stderr: "" Jul 15 00:56:10.721: INFO: stdout: "deployment.apps/frontend created\n" Jul 15 00:56:10.721: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jul 15 00:56:10.721: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7961' Jul 15 00:56:11.053: INFO: stderr: "" Jul 15 00:56:11.053: INFO: stdout: "deployment.apps/agnhost-primary created\n" Jul 15 00:56:11.054: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: 
matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jul 15 00:56:11.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7961' Jul 15 00:56:11.374: INFO: stderr: "" Jul 15 00:56:11.374: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Jul 15 00:56:11.374: INFO: Waiting for all frontend pods to be Running. Jul 15 00:56:21.425: INFO: Waiting for frontend to serve content. Jul 15 00:56:21.435: INFO: Trying to add a new entry to the guestbook. Jul 15 00:56:21.444: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jul 15 00:56:21.478: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7961' Jul 15 00:56:21.666: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 15 00:56:21.666: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Jul 15 00:56:21.666: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7961' Jul 15 00:56:21.886: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n"
Jul 15 00:56:21.886: INFO: stdout: "service \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Jul 15 00:56:21.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7961'
Jul 15 00:56:22.011: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 15 00:56:22.011: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul 15 00:56:22.011: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7961'
Jul 15 00:56:22.124: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 15 00:56:22.124: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul 15 00:56:22.125: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7961'
Jul 15 00:56:22.221: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 15 00:56:22.221: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Jul 15 00:56:22.222: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7961'
Jul 15 00:56:22.935: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 15 00:56:22.935: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:56:22.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7961" for this suite.
• [SLOW TEST:13.775 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:350
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":294,"completed":252,"skipped":3915,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli]
Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:56:23.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:255
[It] should check if kubectl can dry-run update Pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul 15 00:56:23.433: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7582'
Jul 15 00:56:23.554: INFO: stderr: ""
Jul 15 00:56:23.554: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: replace the image in the pod with server-side dry-run
Jul 15 00:56:23.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-7582'
Jul 15 00:56:23.966: INFO: stderr: ""
Jul 15 00:56:23.966: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-07-15T00:56:23Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-07-15T00:56:23Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7582\",\n \"resourceVersion\": \"1238988\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7582/pods/e2e-test-httpd-pod\",\n \"uid\": \"74655fb9-61a3-4e0d-9a68-4641c90ed382\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-9hmb5\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-9hmb5\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-9hmb5\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-15T00:56:23Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\"\n }\n}\n"
Jul 15 00:56:23.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-7582'
Jul 15 00:56:24.439: INFO: stderr: "W0715 00:56:24.041610 3778 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n"
Jul 15 00:56:24.439: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine
Jul 15 00:56:24.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7582'
Jul 15 00:56:27.739: INFO: stderr: ""
Jul 15 00:56:27.739: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:56:27.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7582" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":294,"completed":253,"skipped":3982,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:56:27.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jul 15 00:56:28.062: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"c8c4449e-7f7d-46a8-b095-76ed59735f84", Controller:(*bool)(0xc00205b1a2), BlockOwnerDeletion:(*bool)(0xc00205b1a3)}}
Jul 15 00:56:28.093: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"30e13f82-c526-49f1-bda3-03a989abcf27", Controller:(*bool)(0xc00387f852), BlockOwnerDeletion:(*bool)(0xc00387f853)}}
Jul 15 00:56:28.151: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ccd5be74-2fa3-4cc1-9c66-227256eff507", Controller:(*bool)(0xc003e58532), BlockOwnerDeletion:(*bool)(0xc003e58533)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:56:33.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3623" for this suite.
• [SLOW TEST:5.467 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":294,"completed":254,"skipped":3998,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:56:33.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:57:33.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2489" for this suite.
• [SLOW TEST:60.088 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":294,"completed":255,"skipped":4012,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:57:33.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 15 00:57:34.105: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 15 00:57:36.135: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371454, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371454, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371454, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371454, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 15 00:57:38.164: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371454, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371454, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371454, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371454, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 15 00:57:41.169: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:57:41.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3842" for this suite.
STEP: Destroying namespace "webhook-3842-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.570 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":294,"completed":256,"skipped":4055,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:57:41.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-7896
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 15 00:57:41.922: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jul 15 00:57:42.020: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 15 00:57:44.023: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 15 00:57:46.025: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 15 00:57:48.025: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 15 00:57:50.025: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 15 00:57:52.026: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 15 00:57:54.029: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 15 00:57:56.025: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 15 00:57:58.025: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 15 00:58:00.027: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 15 00:58:02.024: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 15 00:58:04.025: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jul 15 00:58:04.030: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jul 15 00:58:08.075: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.130 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7896 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 15 00:58:08.075: INFO: >>> kubeConfig: /root/.kube/config
I0715 00:58:08.102868       7 log.go:181] (0xc0043082c0) (0xc0028343c0) Create stream
I0715 00:58:08.102894       7 log.go:181] (0xc0043082c0) (0xc0028343c0) Stream added, broadcasting: 1
I0715 00:58:08.106727       7 log.go:181] (0xc0043082c0) Reply frame received for 1
I0715 00:58:08.106791       7 log.go:181] (0xc0043082c0) (0xc0014685a0) Create stream
I0715 00:58:08.106812       7 log.go:181] (0xc0043082c0) (0xc0014685a0) Stream added, broadcasting: 3
I0715 00:58:08.107640       7 log.go:181] (0xc0043082c0) Reply frame received for 3
I0715 00:58:08.107663       7 log.go:181] (0xc0043082c0) (0xc0016dfa40) Create stream
I0715 00:58:08.107672       7 log.go:181] (0xc0043082c0) (0xc0016dfa40) Stream added, broadcasting: 5
I0715 00:58:08.108686       7 log.go:181] (0xc0043082c0) Reply frame received for 5
I0715 00:58:09.159527       7 log.go:181] (0xc0043082c0) Data frame received for 3
I0715 00:58:09.159570       7 log.go:181] (0xc0014685a0) (3) Data frame handling
I0715 00:58:09.159596       7 log.go:181] (0xc0014685a0) (3) Data frame sent
I0715 00:58:09.159881       7 log.go:181] (0xc0043082c0) Data frame received for 5
I0715 00:58:09.159924       7 log.go:181] (0xc0043082c0) Data frame received for 3
I0715 00:58:09.159964       7 log.go:181] (0xc0014685a0) (3) Data frame handling
I0715 00:58:09.159982       7 log.go:181] (0xc0016dfa40) (5) Data frame handling
I0715 00:58:09.163163       7 log.go:181] (0xc0043082c0) Data frame received for 1
I0715 00:58:09.163187       7 log.go:181] (0xc0028343c0) (1) Data frame handling
I0715 00:58:09.163199       7 log.go:181] (0xc0028343c0) (1) Data frame sent
I0715 00:58:09.163212       7 log.go:181] (0xc0043082c0) (0xc0028343c0) Stream removed, broadcasting: 1
I0715 00:58:09.163303       7 log.go:181] (0xc0043082c0) (0xc0028343c0) Stream removed, broadcasting: 1
I0715 00:58:09.163318       7 log.go:181] (0xc0043082c0) (0xc0014685a0) Stream removed, broadcasting: 3
I0715 00:58:09.163379       7 log.go:181] (0xc0043082c0) Go away received
I0715 00:58:09.163538       7 log.go:181] (0xc0043082c0) (0xc0016dfa40) Stream removed, broadcasting: 5
Jul 15 00:58:09.163: INFO: Found all expected endpoints: [netserver-0]
Jul 15 00:58:09.167: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.251 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7896 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 15 00:58:09.167: INFO: >>> kubeConfig: /root/.kube/config
I0715 00:58:09.205568       7 log.go:181] (0xc00414a6e0) (0xc002747360) Create stream
I0715 00:58:09.205593       7 log.go:181] (0xc00414a6e0) (0xc002747360) Stream added, broadcasting: 1
I0715 00:58:09.207392       7 log.go:181] (0xc00414a6e0) Reply frame received for 1
I0715 00:58:09.207416       7 log.go:181] (0xc00414a6e0) (0xc0024b0000) Create stream
I0715 00:58:09.207425       7 log.go:181] (0xc00414a6e0) (0xc0024b0000) Stream added, broadcasting: 3
I0715 00:58:09.208127       7 log.go:181] (0xc00414a6e0) Reply frame received for 3
I0715 00:58:09.208155       7 log.go:181] (0xc00414a6e0) (0xc001469c20) Create stream
I0715 00:58:09.208166       7 log.go:181] (0xc00414a6e0) (0xc001469c20) Stream added, broadcasting: 5
I0715 00:58:09.208883       7 log.go:181] (0xc00414a6e0) Reply frame received for 5
I0715 00:58:10.292812       7 log.go:181] (0xc00414a6e0) Data frame received for 3
I0715 00:58:10.292863       7 log.go:181] (0xc0024b0000) (3) Data frame handling
I0715 00:58:10.292889       7 log.go:181] (0xc0024b0000) (3) Data frame sent
I0715 00:58:10.293200       7 log.go:181] (0xc00414a6e0) Data frame received for 5
I0715 00:58:10.293240       7 log.go:181] (0xc001469c20) (5) Data frame handling
I0715 00:58:10.293395       7 log.go:181] (0xc00414a6e0) Data frame received for 3
I0715 00:58:10.293414       7 log.go:181] (0xc0024b0000) (3) Data frame handling
I0715 00:58:10.295103       7 log.go:181] (0xc00414a6e0) Data frame received for 1
I0715 00:58:10.295121       7 log.go:181] (0xc002747360) (1) Data frame handling
I0715 00:58:10.295142       7 log.go:181] (0xc002747360) (1) Data frame sent
I0715 00:58:10.295161       7 log.go:181] (0xc00414a6e0) (0xc002747360) Stream removed, broadcasting: 1
I0715 00:58:10.295230       7 log.go:181] (0xc00414a6e0) (0xc002747360) Stream removed, broadcasting: 1
I0715 00:58:10.295239       7 log.go:181] (0xc00414a6e0) (0xc0024b0000) Stream removed, broadcasting: 3
I0715 00:58:10.295328       7 log.go:181] (0xc00414a6e0) Go away received
I0715 00:58:10.295376       7 log.go:181] (0xc00414a6e0) (0xc001469c20) Stream removed, broadcasting: 5
Jul 15 00:58:10.295: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:58:10.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7896" for this suite.
• [SLOW TEST:28.439 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":257,"skipped":4055,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:58:10.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jul 15 00:58:11.040: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jul 15 00:58:13.051: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371491, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371491, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371491, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371491, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-869fb7d886\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 15 00:58:15.070: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371491, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371491, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371491, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371491, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-869fb7d886\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 15 00:58:18.147: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jul 15 00:58:18.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:58:19.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-5096" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:9.229 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":294,"completed":258,"skipped":4065,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:58:19.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul 15 00:58:24.730: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul 15 00:58:24.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-604" for this suite.
• [SLOW TEST:5.254 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":294,"completed":259,"skipped":4096,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul 15 00:58:24.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 15 00:58:24.893: INFO: Waiting up to 5m0s for pod "pod-ca7e89f3-079a-4794-b6bd-91ef80872b79" in namespace "emptydir-1686" to be "Succeeded or Failed"
Jul 15 00:58:24.927: INFO: Pod "pod-ca7e89f3-079a-4794-b6bd-91ef80872b79": Phase="Pending", Reason="", readiness=false. Elapsed: 33.785554ms
Jul 15 00:58:26.952: INFO: Pod "pod-ca7e89f3-079a-4794-b6bd-91ef80872b79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058102098s
Jul 15 00:58:28.957: INFO: Pod "pod-ca7e89f3-079a-4794-b6bd-91ef80872b79": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.063773134s STEP: Saw pod success Jul 15 00:58:28.957: INFO: Pod "pod-ca7e89f3-079a-4794-b6bd-91ef80872b79" satisfied condition "Succeeded or Failed" Jul 15 00:58:28.959: INFO: Trying to get logs from node latest-worker pod pod-ca7e89f3-079a-4794-b6bd-91ef80872b79 container test-container: STEP: delete the pod Jul 15 00:58:29.001: INFO: Waiting for pod pod-ca7e89f3-079a-4794-b6bd-91ef80872b79 to disappear Jul 15 00:58:29.032: INFO: Pod pod-ca7e89f3-079a-4794-b6bd-91ef80872b79 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:58:29.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1686" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":260,"skipped":4116,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:58:29.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:58:29.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-2860" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":294,"completed":261,"skipped":4156,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:58:29.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 15 00:58:30.599: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 15 00:58:32.610: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371510, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371510, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371510, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371510, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 15 00:58:35.692: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:58:35.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8286" for this suite. STEP: Destroying namespace "webhook-8286-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.624 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":294,"completed":262,"skipped":4176,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:58:35.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Jul 15 00:58:36.025: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Jul 15 00:58:36.029: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jul 15 00:58:36.029: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Jul 15 00:58:36.037: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jul 15 00:58:36.037: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Jul 15 00:58:36.090: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Jul 15 00:58:36.090: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Jul 15 00:58:43.549: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:58:43.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-9973" for this suite. • [SLOW TEST:7.644 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":294,"completed":263,"skipped":4197,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:58:43.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 15 00:58:43.805: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc536b40-8ab1-4591-a745-fed265555619" in namespace "projected-7486" to be "Succeeded or Failed" Jul 15 00:58:43.823: INFO: Pod "downwardapi-volume-bc536b40-8ab1-4591-a745-fed265555619": Phase="Pending", Reason="", readiness=false. Elapsed: 17.726998ms Jul 15 00:58:45.847: INFO: Pod "downwardapi-volume-bc536b40-8ab1-4591-a745-fed265555619": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041859629s Jul 15 00:58:48.086: INFO: Pod "downwardapi-volume-bc536b40-8ab1-4591-a745-fed265555619": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.280659205s STEP: Saw pod success Jul 15 00:58:48.086: INFO: Pod "downwardapi-volume-bc536b40-8ab1-4591-a745-fed265555619" satisfied condition "Succeeded or Failed" Jul 15 00:58:48.213: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-bc536b40-8ab1-4591-a745-fed265555619 container client-container: STEP: delete the pod Jul 15 00:58:48.464: INFO: Waiting for pod downwardapi-volume-bc536b40-8ab1-4591-a745-fed265555619 to disappear Jul 15 00:58:48.480: INFO: Pod downwardapi-volume-bc536b40-8ab1-4591-a745-fed265555619 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:58:48.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7486" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":294,"completed":264,"skipped":4202,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:58:48.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] 
Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:58:54.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3160" for this suite. • [SLOW TEST:6.187 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":265,"skipped":4213,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client 
Jul 15 00:58:54.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-4e87cd03-c58d-4863-b547-0a475a15f2bd STEP: Creating a pod to test consume secrets Jul 15 00:58:54.904: INFO: Waiting up to 5m0s for pod "pod-secrets-7998d094-8079-46e1-bd1d-cf5bdc9e2a70" in namespace "secrets-3230" to be "Succeeded or Failed" Jul 15 00:58:54.911: INFO: Pod "pod-secrets-7998d094-8079-46e1-bd1d-cf5bdc9e2a70": Phase="Pending", Reason="", readiness=false. Elapsed: 7.473471ms Jul 15 00:58:56.943: INFO: Pod "pod-secrets-7998d094-8079-46e1-bd1d-cf5bdc9e2a70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03949101s Jul 15 00:58:58.947: INFO: Pod "pod-secrets-7998d094-8079-46e1-bd1d-cf5bdc9e2a70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043329815s STEP: Saw pod success Jul 15 00:58:58.947: INFO: Pod "pod-secrets-7998d094-8079-46e1-bd1d-cf5bdc9e2a70" satisfied condition "Succeeded or Failed" Jul 15 00:58:58.950: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-7998d094-8079-46e1-bd1d-cf5bdc9e2a70 container secret-volume-test: STEP: delete the pod Jul 15 00:58:59.033: INFO: Waiting for pod pod-secrets-7998d094-8079-46e1-bd1d-cf5bdc9e2a70 to disappear Jul 15 00:58:59.214: INFO: Pod pod-secrets-7998d094-8079-46e1-bd1d-cf5bdc9e2a70 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:58:59.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3230" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":266,"skipped":4222,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:58:59.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jul 15 00:58:59.500: INFO: Waiting up to 5m0s for pod "downward-api-cbc589a4-7b73-45a7-8769-ab879f1ab434" in namespace "downward-api-7345" to be "Succeeded or Failed" Jul 15 00:58:59.516: INFO: Pod "downward-api-cbc589a4-7b73-45a7-8769-ab879f1ab434": Phase="Pending", Reason="", readiness=false. Elapsed: 15.88955ms Jul 15 00:59:01.521: INFO: Pod "downward-api-cbc589a4-7b73-45a7-8769-ab879f1ab434": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020551001s Jul 15 00:59:03.686: INFO: Pod "downward-api-cbc589a4-7b73-45a7-8769-ab879f1ab434": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.186023487s STEP: Saw pod success Jul 15 00:59:03.686: INFO: Pod "downward-api-cbc589a4-7b73-45a7-8769-ab879f1ab434" satisfied condition "Succeeded or Failed" Jul 15 00:59:03.689: INFO: Trying to get logs from node latest-worker pod downward-api-cbc589a4-7b73-45a7-8769-ab879f1ab434 container dapi-container: STEP: delete the pod Jul 15 00:59:03.731: INFO: Waiting for pod downward-api-cbc589a4-7b73-45a7-8769-ab879f1ab434 to disappear Jul 15 00:59:03.737: INFO: Pod downward-api-cbc589a4-7b73-45a7-8769-ab879f1ab434 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:59:03.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7345" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":294,"completed":267,"skipped":4233,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:59:03.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-09e7dae6-2a84-40df-b708-96405c404ee7 STEP: Creating a pod to test consume configMaps Jul 15 00:59:03.857: INFO: Waiting up to 5m0s for pod "pod-configmaps-71d05f66-65b9-4a1c-a488-48c057057632" in namespace "configmap-4475" to be "Succeeded or Failed" Jul 15 00:59:03.863: INFO: Pod "pod-configmaps-71d05f66-65b9-4a1c-a488-48c057057632": Phase="Pending", Reason="", readiness=false. Elapsed: 5.830944ms Jul 15 00:59:05.867: INFO: Pod "pod-configmaps-71d05f66-65b9-4a1c-a488-48c057057632": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010071732s Jul 15 00:59:07.876: INFO: Pod "pod-configmaps-71d05f66-65b9-4a1c-a488-48c057057632": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019009596s STEP: Saw pod success Jul 15 00:59:07.876: INFO: Pod "pod-configmaps-71d05f66-65b9-4a1c-a488-48c057057632" satisfied condition "Succeeded or Failed" Jul 15 00:59:07.923: INFO: Trying to get logs from node latest-worker pod pod-configmaps-71d05f66-65b9-4a1c-a488-48c057057632 container configmap-volume-test: STEP: delete the pod Jul 15 00:59:08.243: INFO: Waiting for pod pod-configmaps-71d05f66-65b9-4a1c-a488-48c057057632 to disappear Jul 15 00:59:08.247: INFO: Pod pod-configmaps-71d05f66-65b9-4a1c-a488-48c057057632 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:59:08.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4475" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":294,"completed":268,"skipped":4273,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:59:08.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-14415810-9986-4414-8793-d6da54b4c91c STEP: Creating a pod to test consume secrets Jul 15 00:59:08.425: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-720545dc-b882-4c30-b2f8-3ad6b4aefad4" in namespace "projected-5338" to be "Succeeded or Failed" Jul 15 00:59:08.441: INFO: Pod "pod-projected-secrets-720545dc-b882-4c30-b2f8-3ad6b4aefad4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.194757ms Jul 15 00:59:10.446: INFO: Pod "pod-projected-secrets-720545dc-b882-4c30-b2f8-3ad6b4aefad4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020584918s Jul 15 00:59:12.448: INFO: Pod "pod-projected-secrets-720545dc-b882-4c30-b2f8-3ad6b4aefad4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023235528s STEP: Saw pod success Jul 15 00:59:12.448: INFO: Pod "pod-projected-secrets-720545dc-b882-4c30-b2f8-3ad6b4aefad4" satisfied condition "Succeeded or Failed" Jul 15 00:59:12.450: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-720545dc-b882-4c30-b2f8-3ad6b4aefad4 container projected-secret-volume-test: STEP: delete the pod Jul 15 00:59:12.522: INFO: Waiting for pod pod-projected-secrets-720545dc-b882-4c30-b2f8-3ad6b4aefad4 to disappear Jul 15 00:59:12.554: INFO: Pod pod-projected-secrets-720545dc-b882-4c30-b2f8-3ad6b4aefad4 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 00:59:12.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5338" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":294,"completed":269,"skipped":4381,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 00:59:12.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod Jul 15 01:01:13.245: INFO: Successfully updated pod "var-expansion-81176e24-b35f-4ffc-961d-1ff798668397" STEP: waiting for pod running STEP: deleting the pod gracefully Jul 15 01:01:15.321: INFO: Deleting pod "var-expansion-81176e24-b35f-4ffc-961d-1ff798668397" in namespace "var-expansion-6483" Jul 15 01:01:15.327: INFO: Wait up to 5m0s for pod "var-expansion-81176e24-b35f-4ffc-961d-1ff798668397" to be fully deleted [AfterEach] [k8s.io] Variable Expansion 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:01:59.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6483" for this suite. • [SLOW TEST:166.801 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":294,"completed":270,"skipped":4384,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 01:01:59.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:01:59.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6895" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":294,"completed":271,"skipped":4397,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 01:01:59.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 01:01:59.701: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jul 15 01:01:59.712: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:01:59.760: INFO: Number of nodes with available pods: 0 Jul 15 01:01:59.760: INFO: Node latest-worker is running more than one daemon pod Jul 15 01:02:00.765: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:00.769: INFO: Number of nodes with available pods: 0 Jul 15 01:02:00.769: INFO: Node latest-worker is running more than one daemon pod Jul 15 01:02:01.766: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:01.770: INFO: Number of nodes with available pods: 0 Jul 15 01:02:01.770: INFO: Node latest-worker is running more than one daemon pod Jul 15 01:02:02.766: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:02.769: INFO: Number of nodes with available pods: 0 Jul 15 01:02:02.769: INFO: Node latest-worker is running more than one daemon pod Jul 15 01:02:03.765: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:03.768: INFO: Number of nodes with available pods: 0 Jul 15 01:02:03.768: INFO: Node latest-worker is running more than one daemon pod Jul 15 01:02:04.764: INFO: DaemonSet pods can't 
tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:04.767: INFO: Number of nodes with available pods: 2 Jul 15 01:02:04.767: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jul 15 01:02:04.874: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:04.874: INFO: Wrong image for pod: daemon-set-dtqkr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:04.911: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:05.919: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:05.920: INFO: Wrong image for pod: daemon-set-dtqkr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:05.925: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:06.916: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:06.916: INFO: Wrong image for pod: daemon-set-dtqkr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
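The repeated "Wrong image for pod" entries above show the suite after it has patched the DaemonSet's pod template image, polling until the controller replaces each pod. A hedged sketch of a DaemonSet carrying the RollingUpdate strategy this test exercises — labels and structure here are illustrative, only the initial image string is taken from the log:

```yaml
# Illustrative DaemonSet: with updateStrategy RollingUpdate, changing
# spec.template (e.g. the container image) causes the controller to
# delete and recreate pods node by node, which is the churn the
# polling entries in this log are waiting out.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set         # illustrative label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1                  # default: one pod replaced at a time
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # initial image seen in the log
```

After the image is patched to the agnhost:2.20 test image, the test keeps listing pods until every one reports the new image and becomes available again, matching the entry-by-entry progression recorded here.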
Jul 15 01:02:06.920: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:07.917: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:07.917: INFO: Wrong image for pod: daemon-set-dtqkr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:07.921: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:08.916: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:08.916: INFO: Wrong image for pod: daemon-set-dtqkr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:08.916: INFO: Pod daemon-set-dtqkr is not available Jul 15 01:02:08.919: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:09.916: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:09.916: INFO: Wrong image for pod: daemon-set-dtqkr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Jul 15 01:02:09.916: INFO: Pod daemon-set-dtqkr is not available Jul 15 01:02:09.919: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:10.917: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:10.917: INFO: Wrong image for pod: daemon-set-dtqkr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:10.917: INFO: Pod daemon-set-dtqkr is not available Jul 15 01:02:10.922: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:11.916: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:11.916: INFO: Wrong image for pod: daemon-set-dtqkr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:11.916: INFO: Pod daemon-set-dtqkr is not available Jul 15 01:02:11.920: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:12.917: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:12.917: INFO: Wrong image for pod: daemon-set-dtqkr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Jul 15 01:02:12.917: INFO: Pod daemon-set-dtqkr is not available Jul 15 01:02:12.921: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:13.916: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:13.916: INFO: Wrong image for pod: daemon-set-dtqkr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:13.916: INFO: Pod daemon-set-dtqkr is not available Jul 15 01:02:13.920: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:14.916: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:14.916: INFO: Wrong image for pod: daemon-set-dtqkr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:14.916: INFO: Pod daemon-set-dtqkr is not available Jul 15 01:02:14.921: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:15.915: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:15.915: INFO: Wrong image for pod: daemon-set-dtqkr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Jul 15 01:02:15.915: INFO: Pod daemon-set-dtqkr is not available Jul 15 01:02:15.921: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:16.916: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:16.916: INFO: Wrong image for pod: daemon-set-dtqkr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:16.916: INFO: Pod daemon-set-dtqkr is not available Jul 15 01:02:16.920: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:17.917: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:17.917: INFO: Wrong image for pod: daemon-set-dtqkr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:17.917: INFO: Pod daemon-set-dtqkr is not available Jul 15 01:02:17.921: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:18.916: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:18.916: INFO: Wrong image for pod: daemon-set-dtqkr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Jul 15 01:02:18.916: INFO: Pod daemon-set-dtqkr is not available Jul 15 01:02:18.919: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:19.917: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:19.917: INFO: Pod daemon-set-c58sm is not available Jul 15 01:02:19.921: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:20.917: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:20.917: INFO: Pod daemon-set-c58sm is not available Jul 15 01:02:20.924: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:21.917: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:21.917: INFO: Pod daemon-set-c58sm is not available Jul 15 01:02:21.921: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:22.917: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Jul 15 01:02:22.921: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:23.917: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:23.917: INFO: Pod daemon-set-9kp9h is not available Jul 15 01:02:23.921: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:24.916: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:24.916: INFO: Pod daemon-set-9kp9h is not available Jul 15 01:02:24.920: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:25.917: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:25.917: INFO: Pod daemon-set-9kp9h is not available Jul 15 01:02:25.920: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:26.916: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Jul 15 01:02:26.916: INFO: Pod daemon-set-9kp9h is not available Jul 15 01:02:26.919: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:27.916: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:27.916: INFO: Pod daemon-set-9kp9h is not available Jul 15 01:02:27.921: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:28.917: INFO: Wrong image for pod: daemon-set-9kp9h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Jul 15 01:02:28.917: INFO: Pod daemon-set-9kp9h is not available Jul 15 01:02:28.921: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:29.917: INFO: Pod daemon-set-wmplx is not available Jul 15 01:02:29.921: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Jul 15 01:02:29.925: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:29.928: INFO: Number of nodes with available pods: 1 Jul 15 01:02:29.928: INFO: Node latest-worker is running more than one daemon pod Jul 15 01:02:30.933: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:30.936: INFO: Number of nodes with available pods: 1 Jul 15 01:02:30.936: INFO: Node latest-worker is running more than one daemon pod Jul 15 01:02:31.959: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:31.963: INFO: Number of nodes with available pods: 1 Jul 15 01:02:31.963: INFO: Node latest-worker is running more than one daemon pod Jul 15 01:02:32.933: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 15 01:02:32.936: INFO: Number of nodes with available pods: 2 Jul 15 01:02:32.936: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9526, will wait for the garbage collector to delete the pods Jul 15 01:02:33.027: INFO: Deleting DaemonSet.extensions daemon-set took: 25.149005ms Jul 15 01:02:35.128: INFO: Terminating DaemonSet.extensions daemon-set pods took: 2.100251623s Jul 15 01:02:37.531: INFO: Number of nodes with available pods: 0 Jul 15 01:02:37.531: INFO: Number of running nodes: 0, number of 
available pods: 0 Jul 15 01:02:37.533: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9526/daemonsets","resourceVersion":"1240939"},"items":null} Jul 15 01:02:37.535: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9526/pods","resourceVersion":"1240939"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:02:37.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9526" for this suite. • [SLOW TEST:37.968 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":294,"completed":272,"skipped":4419,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 
STEP: Creating a kubernetes client Jul 15 01:02:37.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 01:02:37.693: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jul 15 01:02:39.596: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1079 create -f -' Jul 15 01:02:43.129: INFO: stderr: "" Jul 15 01:02:43.129: INFO: stdout: "e2e-test-crd-publish-openapi-9467-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jul 15 01:02:43.129: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1079 delete e2e-test-crd-publish-openapi-9467-crds test-foo' Jul 15 01:02:43.244: INFO: stderr: "" Jul 15 01:02:43.244: INFO: stdout: "e2e-test-crd-publish-openapi-9467-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jul 15 01:02:43.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1079 apply -f -' Jul 15 01:02:43.533: INFO: stderr: "" Jul 15 01:02:43.533: INFO: stdout: "e2e-test-crd-publish-openapi-9467-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jul 15 01:02:43.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1079 delete e2e-test-crd-publish-openapi-9467-crds test-foo' Jul 15 01:02:43.641: INFO: stderr: "" Jul 15 01:02:43.641: INFO: stdout: 
"e2e-test-crd-publish-openapi-9467-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jul 15 01:02:43.641: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1079 create -f -' Jul 15 01:02:43.945: INFO: rc: 1 Jul 15 01:02:43.945: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1079 apply -f -' Jul 15 01:02:44.201: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jul 15 01:02:44.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1079 create -f -' Jul 15 01:02:44.450: INFO: rc: 1 Jul 15 01:02:44.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1079 apply -f -' Jul 15 01:02:44.734: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jul 15 01:02:44.734: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9467-crds' Jul 15 01:02:44.993: INFO: stderr: "" Jul 15 01:02:44.993: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9467-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jul 15 01:02:44.994: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9467-crds.metadata' Jul 15 01:02:45.277: INFO: stderr: "" Jul 15 01:02:45.277: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9467-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. 
This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. 
In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. 
An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jul 15 01:02:45.278: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9467-crds.spec' Jul 15 01:02:45.524: INFO: stderr: "" Jul 15 01:02:45.524: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9467-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jul 15 01:02:45.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9467-crds.spec.bars' Jul 15 01:02:45.815: INFO: stderr: "" Jul 15 01:02:45.815: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9467-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jul 15 01:02:45.815: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39087 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9467-crds.spec.bars2' Jul 15 01:02:46.101: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:02:47.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1079" for this suite. 
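The `kubectl explain` output above implies that the test registered a CRD with a structural validation schema roughly like the following. The group, resource, and kind names are taken from the log; the field types and the exact schema are reconstructed assumptions (the explain output's type annotations were lost), so treat this as a sketch of the fixture, not the test's actual manifest:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must be <plural>.<group>
  name: e2e-test-crd-publish-openapi-9467-crds.crd-publish-openapi-test-foo.example.com
spec:
  group: crd-publish-openapi-test-foo.example.com
  scope: Namespaced
  names:
    plural: e2e-test-crd-publish-openapi-9467-crds
    kind: E2e-test-crd-publish-openapi-9467-crd   # as shown by `kubectl explain`
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              description: Specification of Foo
              properties:
                bars:
                  type: array
                  description: List of Bars and their specs.
                  items:
                    type: object
                    required: ["name"]          # explain marks name as -required-
                    properties:
                      name:
                        type: string            # type assumed
                        description: Name of Bar.
                      age:
                        type: integer           # type assumed
                        description: Age of Bar.
                      bazs:
                        type: array
                        description: List of Bazs.
                        items:
                          type: string
```

With such a schema published, `kubectl explain <plural>.spec.bars` resolves each nested property, and asking for a property absent from the schema (`.spec.bars2` above) exits non-zero, which is exactly what the test asserts.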
• [SLOW TEST:10.423 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":294,"completed":273,"skipped":4423,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 01:02:47.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 15 01:02:48.743: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 15 
01:02:50.753: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371768, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371768, loc:(*time.Location)(0x7deddc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371768, loc:(*time.Location)(0x7deddc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730371768, loc:(*time.Location)(0x7deddc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-d96bd46c8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 15 01:02:53.804: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:03:06.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5907" for this suite. STEP: Destroying namespace "webhook-5907-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.189 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":294,"completed":274,"skipped":4424,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 01:03:06.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a 
default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jul 15 01:03:06.274: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jul 15 01:03:16.665: INFO: >>> kubeConfig: /root/.kube/config Jul 15 01:03:19.567: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:03:30.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2808" for this suite. • [SLOW TEST:23.875 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":294,"completed":275,"skipped":4465,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir 
volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 01:03:30.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Jul 15 01:03:30.125: INFO: Waiting up to 5m0s for pod "pod-ae89f5e8-ca6a-4f31-88ff-e0055f9626e2" in namespace "emptydir-3651" to be "Succeeded or Failed" Jul 15 01:03:30.132: INFO: Pod "pod-ae89f5e8-ca6a-4f31-88ff-e0055f9626e2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.146296ms Jul 15 01:03:32.136: INFO: Pod "pod-ae89f5e8-ca6a-4f31-88ff-e0055f9626e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0110182s Jul 15 01:03:34.140: INFO: Pod "pod-ae89f5e8-ca6a-4f31-88ff-e0055f9626e2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015082232s STEP: Saw pod success Jul 15 01:03:34.140: INFO: Pod "pod-ae89f5e8-ca6a-4f31-88ff-e0055f9626e2" satisfied condition "Succeeded or Failed" Jul 15 01:03:34.143: INFO: Trying to get logs from node latest-worker pod pod-ae89f5e8-ca6a-4f31-88ff-e0055f9626e2 container test-container: STEP: delete the pod Jul 15 01:03:34.181: INFO: Waiting for pod pod-ae89f5e8-ca6a-4f31-88ff-e0055f9626e2 to disappear Jul 15 01:03:34.200: INFO: Pod pod-ae89f5e8-ca6a-4f31-88ff-e0055f9626e2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:03:34.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3651" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":276,"skipped":4515,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 01:03:34.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle 
hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jul 15 01:03:42.344: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 15 01:03:42.351: INFO: Pod pod-with-prestop-exec-hook still exists Jul 15 01:03:44.351: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 15 01:03:44.354: INFO: Pod pod-with-prestop-exec-hook still exists Jul 15 01:03:46.351: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 15 01:03:46.356: INFO: Pod pod-with-prestop-exec-hook still exists Jul 15 01:03:48.351: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 15 01:03:48.355: INFO: Pod pod-with-prestop-exec-hook still exists Jul 15 01:03:50.351: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 15 01:03:50.355: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:03:50.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9927" for this suite. 
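The pod this test creates registers a `preStop` exec hook, which the kubelet runs before sending the container its termination signal. A minimal sketch of such a pod — the pod and container names follow the log, but the image and command are assumptions (the real test posts to a separate handler pod rather than echoing):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook   # name taken from the log above
spec:
  containers:
    - name: pod-with-prestop-exec-hook
      image: registry.k8s.io/e2e-test-images/agnhost:2.20   # assumed image
      lifecycle:
        preStop:
          exec:
            # assumed command; executed inside the container before SIGTERM
            command: ["/bin/sh", "-c", "echo prestop"]
```

Deleting the pod triggers the hook, and the pod lingers through its graceful-termination window — which is why the log shows several "still exists" polls before "no longer exists".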
• [SLOW TEST:16.163 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":294,"completed":277,"skipped":4526,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 01:03:50.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-3449 STEP: Waiting for pods to come up. 
STEP: Creating tester pod tester in namespace prestop-3449 STEP: Deleting pre-stop pod Jul 15 01:04:03.516: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:04:03.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3449" for this suite. • [SLOW TEST:13.202 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":294,"completed":278,"skipped":4526,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 01:04:03.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-cdb75317-0590-4ae0-8520-2676edda6385 in namespace container-probe-9332 Jul 15 01:04:07.821: INFO: Started pod liveness-cdb75317-0590-4ae0-8520-2676edda6385 in namespace container-probe-9332 STEP: checking the pod's current state and verifying that restartCount is present Jul 15 01:04:07.824: INFO: Initial restart count of pod liveness-cdb75317-0590-4ae0-8520-2676edda6385 is 0 Jul 15 01:04:23.878: INFO: Restart count of pod container-probe-9332/liveness-cdb75317-0590-4ae0-8520-2676edda6385 is now 1 (16.053943316s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:04:23.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9332" for this suite. 
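The restart observed above (restartCount going from 0 to 1 after ~16s) is driven by an HTTP liveness probe against `/healthz`. A minimal sketch of the pod shape, assuming the e2e `agnhost liveness` image, which serves `/healthz` successfully for a short period and then starts failing:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
    - name: liveness
      image: registry.k8s.io/e2e-test-images/agnhost:2.20  # assumed image
      args: ["liveness"]              # serves /healthz, then begins returning errors
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 15       # illustrative values
        failureThreshold: 1
```

Once the probe fails, the kubelet kills and restarts the container, incrementing `restartCount` — the condition the test polls for.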
• [SLOW TEST:20.374 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":294,"completed":279,"skipped":4533,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 01:04:23.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jul 15 01:04:24.294: INFO: Pod name pod-release: Found 0 pods out of 1 Jul 15 01:04:29.329: INFO: Pod 
name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:04:30.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7555" for this suite. • [SLOW TEST:6.517 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":294,"completed":280,"skipped":4577,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 01:04:30.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 01:04:30.682: INFO: Creating 
ReplicaSet my-hostname-basic-7d67fffe-1ec4-4280-9731-1dac85604e4c Jul 15 01:04:30.810: INFO: Pod name my-hostname-basic-7d67fffe-1ec4-4280-9731-1dac85604e4c: Found 0 pods out of 1 Jul 15 01:04:36.051: INFO: Pod name my-hostname-basic-7d67fffe-1ec4-4280-9731-1dac85604e4c: Found 1 pods out of 1 Jul 15 01:04:36.051: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-7d67fffe-1ec4-4280-9731-1dac85604e4c" is running Jul 15 01:04:36.054: INFO: Pod "my-hostname-basic-7d67fffe-1ec4-4280-9731-1dac85604e4c-dw2sb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-15 01:04:30 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-15 01:04:34 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-15 01:04:34 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-15 01:04:30 +0000 UTC Reason: Message:}]) Jul 15 01:04:36.054: INFO: Trying to dial the pod Jul 15 01:04:41.067: INFO: Controller my-hostname-basic-7d67fffe-1ec4-4280-9731-1dac85604e4c: Got expected result from replica 1 [my-hostname-basic-7d67fffe-1ec4-4280-9731-1dac85604e4c-dw2sb]: "my-hostname-basic-7d67fffe-1ec4-4280-9731-1dac85604e4c-dw2sb", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:04:41.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-519" for this suite. 
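The ReplicaSet under test runs one replica of an image that serves its own hostname, then dials each pod and checks the response matches the pod name. A sketch, with a shortened name in place of the generated UUID suffix and an assumed image/port:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic             # the log shows a UUID-suffixed name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
        - name: my-hostname-basic
          image: registry.k8s.io/e2e-test-images/agnhost:2.20  # assumed
          args: ["serve-hostname"]    # replies with the pod's hostname
          ports:
            - containerPort: 9376
```

Because each pod's hostname is its pod name, replying with the hostname is enough for the test to confirm which replica answered ("Got expected result from replica 1" above).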
• [SLOW TEST:10.610 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":294,"completed":281,"skipped":4595,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 01:04:41.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0715 01:04:42.218116 7 metrics_grabber.go:105] Did not receive an external client interface. 
Grabbing metrics from ClusterAutoscaler is disabled. Jul 15 01:04:44.267: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:04:44.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6537" for this suite. 
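The garbage collection verified here hinges on `metadata.ownerReferences`: the Deployment owns its ReplicaSet, the ReplicaSet owns its pods, so deleting the Deployment (without orphaning) lets the collector cascade downward. An illustrative owner-reference block as it would appear on the ReplicaSet — all values here are hypothetical:

```yaml
# metadata fragment of a ReplicaSet created by a Deployment (illustrative)
metadata:
  name: sample-deployment-5d9c7f8b6           # hypothetical generated name
  ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment
      name: sample-deployment                 # hypothetical owner
      uid: 00000000-0000-0000-0000-000000000000  # placeholder UID
      controller: true                        # at most one managing controller
      blockOwnerDeletion: true
```

When every object in `ownerReferences` is gone, the dependent is collected — hence the test's wait for "0 rs" and "0 pods" after deleting only the Deployment.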
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":294,"completed":282,"skipped":4616,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 01:04:44.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 15 01:04:44.347: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8743c688-3f61-4a69-b3b4-0f7f7f4c42d2" in namespace "downward-api-7949" to be "Succeeded or Failed" Jul 15 01:04:44.392: INFO: Pod "downwardapi-volume-8743c688-3f61-4a69-b3b4-0f7f7f4c42d2": Phase="Pending", Reason="", readiness=false. Elapsed: 44.726269ms Jul 15 01:04:46.518: INFO: Pod "downwardapi-volume-8743c688-3f61-4a69-b3b4-0f7f7f4c42d2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.170960898s Jul 15 01:04:48.522: INFO: Pod "downwardapi-volume-8743c688-3f61-4a69-b3b4-0f7f7f4c42d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.175211031s STEP: Saw pod success Jul 15 01:04:48.522: INFO: Pod "downwardapi-volume-8743c688-3f61-4a69-b3b4-0f7f7f4c42d2" satisfied condition "Succeeded or Failed" Jul 15 01:04:48.525: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8743c688-3f61-4a69-b3b4-0f7f7f4c42d2 container client-container: STEP: delete the pod Jul 15 01:04:48.573: INFO: Waiting for pod downwardapi-volume-8743c688-3f61-4a69-b3b4-0f7f7f4c42d2 to disappear Jul 15 01:04:48.679: INFO: Pod downwardapi-volume-8743c688-3f61-4a69-b3b4-0f7f7f4c42d2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:04:48.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7949" for this suite. 
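
The DefaultMode test above creates a pod whose downward API volume files are written with an explicit file mode. A minimal sketch of the kind of manifest this test exercises (the pod name, image, command, and paths here are illustrative assumptions, not taken from the suite source):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.32                   # illustrative image
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400                   # every projected file gets mode 0400
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```

The pod reaches "Succeeded" (as the log's "Succeeded or Failed" wait shows) because its only container runs a short command and exits.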
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":283,"skipped":4658,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 01:04:48.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 15 01:04:57.807: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 15 01:04:57.810: INFO: Pod pod-with-poststart-exec-hook still exists Jul 15 01:04:59.810: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 15 01:04:59.815: INFO: Pod pod-with-poststart-exec-hook still exists Jul 15 01:05:01.810: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 15 01:05:01.815: INFO: Pod pod-with-poststart-exec-hook still exists Jul 15 01:05:03.810: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 15 01:05:03.814: INFO: Pod pod-with-poststart-exec-hook still exists Jul 15 01:05:05.810: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 15 01:05:05.817: INFO: Pod pod-with-poststart-exec-hook still exists Jul 15 01:05:07.810: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 15 01:05:07.823: INFO: Pod pod-with-poststart-exec-hook still exists Jul 15 01:05:09.810: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 15 01:05:09.814: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:05:09.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5811" for this suite. 
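
The test above creates a pod named pod-with-poststart-exec-hook, whose postStart exec hook runs a command inside the container immediately after it starts. A hedged sketch of such a pod (the image and hook command are illustrative assumptions; only the pod name comes from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox:1.32                   # illustrative image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # runs in the container right after it starts; the container is not
          # considered started until the hook completes
          command: ["sh", "-c", "echo poststart > /tmp/poststart"]
```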
• [SLOW TEST:21.110 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":294,"completed":284,"skipped":4719,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 01:05:09.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by 
LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:05:10.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4769" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":294,"completed":285,"skipped":4744,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 01:05:10.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 15 01:05:10.215: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-f2f13a39-dd84-4a9b-b99d-a0757345b242" in namespace "projected-5746" to be "Succeeded or Failed" Jul 15 01:05:10.224: INFO: Pod "downwardapi-volume-f2f13a39-dd84-4a9b-b99d-a0757345b242": Phase="Pending", Reason="", readiness=false. Elapsed: 9.199454ms Jul 15 01:05:12.231: INFO: Pod "downwardapi-volume-f2f13a39-dd84-4a9b-b99d-a0757345b242": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015669961s Jul 15 01:05:14.235: INFO: Pod "downwardapi-volume-f2f13a39-dd84-4a9b-b99d-a0757345b242": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019991604s STEP: Saw pod success Jul 15 01:05:14.235: INFO: Pod "downwardapi-volume-f2f13a39-dd84-4a9b-b99d-a0757345b242" satisfied condition "Succeeded or Failed" Jul 15 01:05:14.238: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f2f13a39-dd84-4a9b-b99d-a0757345b242 container client-container: STEP: delete the pod Jul 15 01:05:14.282: INFO: Waiting for pod downwardapi-volume-f2f13a39-dd84-4a9b-b99d-a0757345b242 to disappear Jul 15 01:05:14.299: INFO: Pod downwardapi-volume-f2f13a39-dd84-4a9b-b99d-a0757345b242 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:05:14.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5746" for this suite. 
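
The projected downwardAPI test above asserts that when a container sets no CPU limit, the limits.cpu value exposed through the volume falls back to the node's allocatable CPU. A minimal sketch of a manifest exercising this behavior (names, image, and the 1m divisor are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-cpu-limit-example      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.32                  # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # note: no resources.limits.cpu is set, so the projected value
    # defaults to node allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m                # report the value in millicores
```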
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":294,"completed":286,"skipped":4755,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 01:05:14.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 15 01:05:14.456: INFO: Waiting up to 5m0s for pod "pod-e9b39334-3161-4c62-bf99-d85bae395c74" in namespace "emptydir-2406" to be "Succeeded or Failed" Jul 15 01:05:14.472: INFO: Pod "pod-e9b39334-3161-4c62-bf99-d85bae395c74": Phase="Pending", Reason="", readiness=false. Elapsed: 16.387583ms Jul 15 01:05:16.584: INFO: Pod "pod-e9b39334-3161-4c62-bf99-d85bae395c74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128624285s Jul 15 01:05:18.588: INFO: Pod "pod-e9b39334-3161-4c62-bf99-d85bae395c74": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.132271663s Jul 15 01:05:20.592: INFO: Pod "pod-e9b39334-3161-4c62-bf99-d85bae395c74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.136196724s STEP: Saw pod success Jul 15 01:05:20.592: INFO: Pod "pod-e9b39334-3161-4c62-bf99-d85bae395c74" satisfied condition "Succeeded or Failed" Jul 15 01:05:20.595: INFO: Trying to get logs from node latest-worker pod pod-e9b39334-3161-4c62-bf99-d85bae395c74 container test-container: STEP: delete the pod Jul 15 01:05:20.627: INFO: Waiting for pod pod-e9b39334-3161-4c62-bf99-d85bae395c74 to disappear Jul 15 01:05:20.653: INFO: Pod pod-e9b39334-3161-4c62-bf99-d85bae395c74 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:05:20.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2406" for this suite. • [SLOW TEST:6.305 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":287,"skipped":4764,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 01:05:20.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 15 01:05:20.756: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:05:24.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5467" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":294,"completed":288,"skipped":4771,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 01:05:24.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:731 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-1693 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1693 to expose endpoints map[] Jul 15 01:05:24.997: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found Jul 15 01:05:26.003: INFO: successfully validated that service multi-endpoint-test in namespace services-1693 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-1693 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1693 to expose endpoints map[pod1:[100]] Jul 15 01:05:29.082: INFO: successfully validated that 
service multi-endpoint-test in namespace services-1693 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-1693 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1693 to expose endpoints map[pod1:[100] pod2:[101]] Jul 15 01:05:33.301: INFO: successfully validated that service multi-endpoint-test in namespace services-1693 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-1693 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1693 to expose endpoints map[pod2:[101]] Jul 15 01:05:33.363: INFO: successfully validated that service multi-endpoint-test in namespace services-1693 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-1693 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1693 to expose endpoints map[] Jul 15 01:05:34.402: INFO: successfully validated that service multi-endpoint-test in namespace services-1693 exposes endpoints map[] [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:05:34.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1693" for this suite. 
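
The Services test above watches the Endpoints object of a two-port Service converge as backing pods are created and deleted. A sketch of a comparable multiport Service (the selector label is an assumption; target ports 100 and 101 mirror the endpoint ports seen in the log):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test             # hypothetical label on pod1/pod2
  ports:
  - name: portname1
    port: 80
    targetPort: 100                      # pod1's container port in the log
  - name: portname2
    port: 81
    targetPort: 101                      # pod2's container port in the log
```

As each matching pod becomes ready, the endpoints controller adds its IP and port to the Service's Endpoints object, which is what the `exposes endpoints map[...]` log lines validate.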
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:735 • [SLOW TEST:9.593 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":294,"completed":289,"skipped":4775,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 01:05:34.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-342a4569-8c63-4fe0-acea-19092c78244e STEP: Creating the pod STEP: Updating configmap configmap-test-upd-342a4569-8c63-4fe0-acea-19092c78244e STEP: waiting to 
observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:05:40.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6186" for this suite. • [SLOW TEST:6.165 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":290,"skipped":4882,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} S ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 15 01:05:40.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Jul 15 01:05:45.243: INFO: Running '/usr/local/bin/kubectl exec 
--namespace=svcaccounts-6865 pod-service-account-fdeb0b87-e65c-4ae3-bae4-5b8866aa6215 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jul 15 01:05:45.467: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6865 pod-service-account-fdeb0b87-e65c-4ae3-bae4-5b8866aa6215 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jul 15 01:05:45.675: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6865 pod-service-account-fdeb0b87-e65c-4ae3-bae4-5b8866aa6215 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 15 01:05:45.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6865" for this suite. • [SLOW TEST:5.437 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":294,"completed":291,"skipped":4883,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSJul 15 01:05:46.058: INFO: Running AfterSuite actions on all nodes Jul 15 01:05:46.058: INFO: Running AfterSuite actions on node 1 Jul 15 01:05:46.058: INFO: Skipping dumping logs from cluster JUnit report was created: 
/home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":294,"completed":291,"skipped":4920,"failed":3,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-network] Ingress API should support creating Ingress API operations [Conformance]","[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]"]}

Summarizing 3 Failures:

[Fail] [sig-auth] Certificates API [Privileged:ClusterAdmin] [It] should support CSR API operations [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:231

[Fail] [sig-network] Ingress API [It] should support creating Ingress API operations [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:1050

[Fail] [sig-network] IngressClass API [It] should support creating IngressClass API operations [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:210

Ran 294 of 5214 Specs in 6283.844 seconds
FAIL! -- 291 Passed | 3 Failed | 0 Pending | 4920 Skipped
--- FAIL: TestE2E (6283.94s)
FAIL