Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1630679017 - Will randomize all specs
Will run 5484 specs
Running in parallel across 10 nodes

Sep 3 14:23:38.911: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 14:23:38.915: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Sep 3 14:23:38.940: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep 3 14:23:38.983: INFO: 18 / 18 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep 3 14:23:38.983: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Sep 3 14:23:38.983: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep 3 14:23:38.990: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
Sep 3 14:23:38.990: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Sep 3 14:23:38.990: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Sep 3 14:23:38.990: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
Sep 3 14:23:38.990: INFO: e2e test version: v1.19.11
Sep 3 14:23:38.991: INFO: kube-apiserver version: v1.19.11
Sep 3 14:23:38.992: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 14:23:38.997: INFO: Cluster IP family: ipv4
Sep 3 14:23:38.996: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 14:23:39.017: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Sep 3 14:23:39.018: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 14:23:39.038: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
Sep 3 14:23:39.024: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 14:23:39.041: INFO: Cluster IP family: ipv4
S
------------------------------
Sep 3 14:23:39.023: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 14:23:39.042: INFO: Cluster IP family: ipv4
S
------------------------------
Sep 3 14:23:39.027: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 14:23:39.043: INFO: Cluster IP family: ipv4
SS
------------------------------
Sep 3 14:23:39.022: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 14:23:39.043: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Sep 3 14:23:39.033: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 14:23:39.052: INFO: Cluster IP family: ipv4
Sep 3 14:23:39.033: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 14:23:39.052: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSS
------------------------------
Sep 3 14:23:39.039: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 14:23:39.056: INFO: Cluster IP family: ipv4
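The suite startup above loads /root/.kube/config on every parallel node and waits for all nodes to be schedulable before any spec runs. A minimal client-go sketch of that kind of readiness check (this is not the e2e framework's own code; the program structure and output are illustrative only):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Equivalent of the ">>> kubeConfig: /root/.kube/config" lines in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// A node is schedulable when spec.unschedulable is false; the real
		// suite additionally inspects node conditions and taints.
		fmt.Printf("node %s schedulable=%v\n", n.Name, !n.Spec.Unschedulable)
	}
}
```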
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:23:39.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption Sep 3 14:23:39.067: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Sep 3 14:23:39.074: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:68 [It] evictions: enough pods, absolute => should allow an eviction /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222 STEP: Waiting for the pdb to be processed STEP: locating a running pod STEP: Waiting for all pods to be running [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:23:45.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-1185" for this suite. 
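The DisruptionController test above ("evictions: enough pods, absolute => should allow an eviction") creates a PodDisruptionBudget with an absolute minAvailable and then drives the eviction subresource. A sketch of those two calls under the policy/v1beta1 API served by this v1.19 cluster (names, namespace, and selector are placeholders, not the test's exact fixtures):

```go
package main

import (
	"context"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

func evictWithBudget(client kubernetes.Interface, ns, podName string) error {
	minAvailable := intstr.FromInt(2) // absolute pod count, not a percentage

	pdb := &policyv1beta1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "foo", Namespace: ns},
		Spec: policyv1beta1.PodDisruptionBudgetSpec{
			Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"foo": "bar"}},
			MinAvailable: &minAvailable,
		},
	}
	if _, err := client.PolicyV1beta1().PodDisruptionBudgets(ns).Create(
		context.TODO(), pdb, metav1.CreateOptions{}); err != nil {
		return err
	}

	// The eviction subresource is what these tests exercise; it succeeds
	// only while the budget still has allowed disruptions to spend.
	eviction := &policyv1beta1.Eviction{
		ObjectMeta: metav1.ObjectMeta{Name: podName, Namespace: ns},
	}
	return client.CoreV1().Pods(ns).Evict(context.TODO(), eviction)
}
```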
• [SLOW TEST:6.091 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 evictions: enough pods, absolute => should allow an eviction /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, absolute =\u003e should allow an eviction","total":-1,"completed":1,"skipped":25,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:23:39.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment Sep 3 14:23:39.245: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Sep 3 14:23:39.248: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment reaping should cascade to its replica sets and pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Sep 3 14:23:39.250: INFO: Creating simple deployment test-new-deployment Sep 3 14:23:39.259: INFO: deployment "test-new-deployment" doesn't have the required revision set Sep 3 14:23:41.270: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-dd94f59b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 14:23:43.274: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, 
loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-dd94f59b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 14:23:45.273: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-dd94f59b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 14:23:47.282: INFO: Deleting deployment test-new-deployment STEP: deleting Deployment.apps test-new-deployment in namespace deployment-5295, will wait for the garbage collector to delete the pods Sep 3 14:23:47.339: INFO: Deleting Deployment.apps test-new-deployment took: 4.613179ms Sep 3 14:23:47.539: INFO: Terminating Deployment.apps test-new-deployment pods took: 200.259998ms Sep 3 14:23:47.539: INFO: Ensuring deployment test-new-deployment was deleted Sep 3 14:23:47.621: INFO: Ensuring deployment test-new-deployment's RSes were deleted Sep 3 14:23:47.824: INFO: Ensuring deployment test-new-deployment's Pods were deleted [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 3 14:23:47.830: INFO: Log out all the ReplicaSets if there is no deployment created [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:23:47.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5295" for this suite. 
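The deployment reaping test above deletes the Deployment and then waits for the garbage collector to remove its ReplicaSets and Pods through ownerReferences. A sketch of a cascading delete with client-go (names are placeholders; the e2e framework also polls until the dependents are actually gone):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteDeploymentCascading(client kubernetes.Interface, ns, name string) error {
	// Background propagation deletes the Deployment immediately and lets the
	// garbage collector clean up the owned ReplicaSets and Pods afterwards.
	policy := metav1.DeletePropagationBackground
	return client.AppsV1().Deployments(ns).Delete(context.TODO(), name,
		metav1.DeleteOptions{PropagationPolicy: &policy})
}
```

With metav1.DeletePropagationForeground the Deployment object would instead remain (with a deletion timestamp) until all of its dependents were removed; either way the "Ensuring ... RSes/Pods were deleted" checks above observe the cascade.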
• [SLOW TEST:8.678 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment reaping should cascade to its replica sets and pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods","total":-1,"completed":1,"skipped":79,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:23:39.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment Sep 3 14:23:39.251: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Sep 3 14:23:39.255: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] test Deployment ReplicaSet orphaning and adoption regarding controllerRef /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:124 Sep 3 14:23:39.258: INFO: Creating Deployment "test-orphan-deployment" Sep 3 14:23:39.263: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)} Sep 3 14:23:41.266: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-orphan-deployment-dd94f59b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 14:23:43.266: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-dd94f59b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 14:23:45.266: INFO: Verifying Deployment "test-orphan-deployment" has only one ReplicaSet Sep 3 14:23:45.270: INFO: Obtaining the ReplicaSet's UID Sep 3 14:23:45.270: INFO: Checking the ReplicaSet has the right controllerRef Sep 3 14:23:45.273: INFO: Deleting Deployment "test-orphan-deployment" and orphaning its ReplicaSet STEP: Wait for the ReplicaSet to be orphaned Sep 3 14:23:47.281: INFO: Creating Deployment "test-adopt-deployment" to adopt the ReplicaSet Sep 3 14:23:47.287: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)} Sep 3 14:23:49.291: INFO: Waiting for the ReplicaSet to have the right controllerRef Sep 3 14:23:49.294: INFO: Verifying no extra ReplicaSet is created (Deployment "test-adopt-deployment" still has only one ReplicaSet after adoption) Sep 3 14:23:49.297: INFO: Verifying the ReplicaSet has the same UID as the orphaned ReplicaSet [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 3 14:23:49.302: INFO: Deployment "test-adopt-deployment": &Deployment{ObjectMeta:{test-adopt-deployment deployment-1585 /apis/apps/v1/namespaces/deployment-1585/deployments/test-adopt-deployment 2a6d28e1-ae9c-4d84-bb45-b14592ee5be2 1070591 1 2021-09-03 14:23:47 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-09-03 14:23:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-09-03 14:23:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002b40fa8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-09-03 14:23:47 +0000 UTC,LastTransitionTime:2021-09-03 14:23:47 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-orphan-deployment-dd94f59b7" has successfully progressed.,LastUpdateTime:2021-09-03 14:23:47 +0000 UTC,LastTransitionTime:2021-09-03 14:23:47 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Sep 3 14:23:49.305: INFO: New ReplicaSet "test-orphan-deployment-dd94f59b7" of Deployment "test-adopt-deployment": &ReplicaSet{ObjectMeta:{test-orphan-deployment-dd94f59b7 deployment-1585 /apis/apps/v1/namespaces/deployment-1585/replicasets/test-orphan-deployment-dd94f59b7 8e674e65-c693-453c-9861-979603fd056d 1070588 1 2021-09-03 14:23:39 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-adopt-deployment 2a6d28e1-ae9c-4d84-bb45-b14592ee5be2 0xc0026c3377 0xc0026c3378}] [] [{kube-controller-manager Update apps/v1 2021-09-03 14:23:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a6d28e1-ae9c-4d84-bb45-b14592ee5be2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0026c33e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 3 14:23:49.309: INFO: Pod "test-orphan-deployment-dd94f59b7-52tsl" is available: &Pod{ObjectMeta:{test-orphan-deployment-dd94f59b7-52tsl test-orphan-deployment-dd94f59b7- deployment-1585 /api/v1/namespaces/deployment-1585/pods/test-orphan-deployment-dd94f59b7-52tsl df7bae0b-b1d9-4db5-a6ba-5cbf5a5b403f 1070472 0 2021-09-03 14:23:39 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet test-orphan-deployment-dd94f59b7 8e674e65-c693-453c-9861-979603fd056d 0xc002b41427 0xc002b41428}] [] [{kube-controller-manager Update v1 2021-09-03 14:23:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e674e65-c693-453c-9861-979603fd056d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-09-03 14:23:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.249\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-js46v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-js46v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-js46v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-5n8xl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:23:39 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:23:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:23:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:23:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:192.168.2.249,StartTime:2021-09-03 14:23:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-03 14:23:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2ddca7cfbdd84bcfe3bdaa011504e9918e0c685f149aecd89b5c4eff51a55eaf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.249,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:23:49.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1585" for this suite. • [SLOW TEST:10.103 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 test Deployment ReplicaSet orphaning and adoption regarding controllerRef /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:124 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":1,"skipped":109,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:23:45.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:68 [It] evictions: too few pods, absolute => should not allow an eviction 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222 STEP: Waiting for the pdb to be processed STEP: locating a running pod [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:23:51.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-4661" for this suite. • [SLOW TEST:6.129 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 evictions: too few pods, absolute => should not allow an eviction /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222 ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:23:39.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job Sep 3 14:23:39.429: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Sep 3 14:23:39.432: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks succeed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:48 STEP: Creating a job STEP: Ensuring job reaches completions STEP: Ensuring pods for job exist [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:23:51.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2085" for this suite. 
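The Job test above ("should run a job to completion when tasks succeed") creates a Job whose pods all exit successfully and then waits for the completions to be reached. A sketch of such a Job with client-go (image, counts, and names are placeholders, not the test's exact spec):

```go
package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createSucceedingJob(client kubernetes.Interface, ns string) (*batchv1.Job, error) {
	parallelism := int32(2)
	completions := int32(4)
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "all-succeed", Namespace: ns},
		Spec: batchv1.JobSpec{
			Parallelism: &parallelism,
			Completions: &completions,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:    "c",
						Image:   "busybox",
						Command: []string{"sh", "-c", "exit 0"}, // every pod succeeds
					}},
				},
			},
		},
	}
	return client.BatchV1().Jobs(ns).Create(context.TODO(), job, metav1.CreateOptions{})
}
```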
• [SLOW TEST:12.058 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks succeed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:48 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":2,"skipped":139,"failed":0} S ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":1,"skipped":281,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:23:39.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job Sep 3 14:23:39.249: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Sep 3 14:23:39.253: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to exceed backoffLimit /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:235 STEP: Creating a job STEP: Ensuring job exceed backofflimit STEP: Checking that 2 pod created and status is failed [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:23:57.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9686" for this suite. • [SLOW TEST:18.148 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should fail to exceed backoffLimit /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:235 ------------------------------ {"msg":"PASSED [sig-apps] Job should fail to exceed backoffLimit","total":-1,"completed":1,"skipped":90,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:23:49.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:68 [It] evictions: no PDB => should allow an eviction /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222 STEP: locating a running pod STEP: Waiting for all pods to be running [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:23:57.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2875" for this suite. 
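The earlier Job test in this chunk ("should fail to exceed backoffLimit") runs the opposite scenario: the pod always fails and backoffLimit is low, so after the initial pod plus the allowed retries (the "2 pod created and status is failed" step) the Job is marked Failed. A sketch of such a Job, with placeholder image and names:

```go
package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createBackoffLimitedJob(client kubernetes.Interface, ns string) (*batchv1.Job, error) {
	backoffLimit := int32(1) // one retry after the first failure, then the Job fails
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "backofflimit", Namespace: ns},
		Spec: batchv1.JobSpec{
			BackoffLimit: &backoffLimit,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:    "c",
						Image:   "busybox",
						Command: []string{"sh", "-c", "exit 1"}, // always fails
					}},
				},
			},
		},
	}
	return client.BatchV1().Jobs(ns).Create(context.TODO(), job, metav1.CreateOptions{})
}
```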
• [SLOW TEST:8.203 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 evictions: no PDB => should allow an eviction /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController evictions: no PDB =\u003e should allow an eviction","total":-1,"completed":2,"skipped":259,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:23:50.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:68 [It] evictions: maxUnavailable allow single eviction, percentage => should allow an eviction /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222 STEP: Waiting for the pdb to be processed STEP: locating a running pod STEP: Waiting for all pods to be running Sep 3 14:23:56.276: INFO: running pods: 3 < 10 Sep 3 14:23:58.283: INFO: running pods: 7 < 10 [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:24:00.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-8708" for this suite. 
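The "maxUnavailable allow single eviction, percentage" test above waits for 10 matching pods and uses a budget expressed as a percentage, so only a single pod may be voluntarily evicted at a time. A sketch of such a PodDisruptionBudget (placeholder name and selector):

```go
package main

import (
	"context"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

func createPercentagePDB(client kubernetes.Interface, ns string) error {
	maxUnavailable := intstr.FromString("10%") // 10% of 10 pods => 1 allowed disruption
	pdb := &policyv1beta1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "foo", Namespace: ns},
		Spec: policyv1beta1.PodDisruptionBudgetSpec{
			Selector:       &metav1.LabelSelector{MatchLabels: map[string]string{"foo": "bar"}},
			MaxUnavailable: &maxUnavailable,
		},
	}
	_, err := client.PolicyV1beta1().PodDisruptionBudgets(ns).Create(
		context.TODO(), pdb, metav1.CreateOptions{})
	return err
}
```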
• [SLOW TEST:10.110 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 evictions: maxUnavailable allow single eviction, percentage => should allow an eviction /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage =\u003e should allow an eviction","total":-1,"completed":2,"skipped":1514,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:23:51.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:68 [It] should block an eviction until the PDB is updated to allow it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:273 STEP: Creating a pdb that targets all three pods in a test replica set STEP: Waiting for the pdb to be processed STEP: First trying to evict a pod which shouldn't be evictable STEP: Waiting for all pods to be running Sep 3 14:23:53.525: INFO: pods: 0 < 3 Sep 3 14:23:55.529: INFO: running pods: 0 < 3 Sep 3 14:23:57.529: INFO: running pods: 0 < 3 Sep 3 14:23:59.529: INFO: running pods: 0 < 3 STEP: locating a running pod STEP: Updating the pdb to allow a pod to be evicted STEP: Waiting for the pdb to be processed STEP: Trying to evict the same pod we tried earlier which should now be evictable STEP: Waiting for all pods to be running STEP: Waiting for the pdb to observed all healthy pods STEP: Patching the pdb to disallow a pod to be evicted STEP: Waiting for the pdb to be processed STEP: Waiting for all pods to be running Sep 3 14:24:05.590: INFO: running pods: 2 < 3 Sep 3 14:24:07.594: INFO: running pods: 2 < 3 STEP: locating a running pod STEP: Deleting the pdb to allow a pod to be evicted STEP: Waiting for the pdb to be deleted STEP: Trying to evict the same pod we tried earlier which should now be evictable STEP: Waiting for all pods to be running [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:24:09.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-6681" for this suite. 
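The test above ("should block an eviction until the PDB is updated to allow it") first creates a budget that covers all three pods, so the eviction subresource refuses the request, then updates, patches, and finally deletes the budget to let the same eviction through. A sketch of the "loosen the budget" step as a merge patch (name and values are placeholders for the test's actual update):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

func loosenPDB(client kubernetes.Interface, ns, name string) error {
	// Drop minAvailable from 3 to 2 so one of the three pods may be evicted;
	// while minAvailable equals the number of healthy pods, evictions are rejected.
	patch := []byte(`{"spec":{"minAvailable":2}}`)
	_, err := client.PolicyV1beta1().PodDisruptionBudgets(ns).Patch(
		context.TODO(), name, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}
```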
• [SLOW TEST:18.452 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should block an eviction until the PDB is updated to allow it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:273 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it","total":-1,"completed":3,"skipped":149,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:23:57.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:68 [It] evictions: enough pods, replicaSet, percentage => should allow an eviction /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222 STEP: Waiting for the pdb to be processed STEP: locating a running pod STEP: Waiting for all pods to be running Sep 3 14:24:05.854: INFO: running pods: 2 < 10 Sep 3 14:24:07.860: INFO: running pods: 4 < 10 Sep 3 14:24:09.932: INFO: running pods: 6 < 10 Sep 3 14:24:11.860: INFO: running pods: 7 < 10 [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:24:13.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-3096" for this suite. 
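The "running pods: n < 10" lines in the test above come from polling the pods behind the budget's selector until enough of them report phase Running. A sketch of that kind of wait loop (selector, interval, and helper name are placeholders; the e2e framework has its own utilities for this):

```go
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForRunningPods(client kubernetes.Interface, ns string, want int) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: "foo=bar"})
		if err != nil {
			return false, err
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		return running >= want, nil // keep polling while running < want
	})
}
```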
• [SLOW TEST:16.084 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 evictions: enough pods, replicaSet, percentage => should allow an eviction /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage =\u003e should allow an eviction","total":-1,"completed":3,"skipped":289,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:23:39.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment Sep 3 14:23:39.255: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Sep 3 14:23:39.258: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] iterative rollouts should eventually progress /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:121 Sep 3 14:23:39.260: INFO: Creating deployment "webserver" Sep 3 14:23:39.263: INFO: 00: rolling back a rollout for deployment "webserver" Sep 3 14:23:39.269: INFO: Updating deployment webserver Sep 3 14:23:39.644: INFO: 01: scaling deployment "webserver" Sep 3 14:23:39.649: INFO: 01: scaling up Sep 3 14:23:39.655: INFO: Updating deployment webserver Sep 3 14:23:40.440: INFO: 02: triggering a new rollout for deployment "webserver" Sep 3 14:23:40.449: INFO: Updating deployment webserver Sep 3 14:23:40.449: INFO: 03: scaling deployment "webserver" Sep 3 14:23:40.451: INFO: 03: scaling down Sep 3 14:23:40.455: INFO: Updating deployment webserver Sep 3 14:23:40.455: INFO: 04: arbitrarily deleting one or more deployment pods for deployment "webserver" Sep 3 14:23:40.460: INFO: 04: deleting deployment pod "webserver-dd94f59b7-fcv5b" Sep 3 14:23:40.467: INFO: 04: deleting deployment pod "webserver-dd94f59b7-fflhg" Sep 3 14:23:40.472: INFO: 04: deleting deployment pod "webserver-dd94f59b7-ghscm" Sep 3 14:23:40.483: INFO: 04: deleting deployment pod "webserver-dd94f59b7-jdw9t" Sep 3 14:23:40.493: INFO: 05: rolling back a rollout for deployment "webserver" Sep 3 14:23:40.503: INFO: Updating deployment webserver Sep 3 14:23:40.503: INFO: 06: triggering a new rollout for deployment "webserver" Sep 3 14:23:40.508: INFO: 06: scaling up Sep 3 14:23:42.515: INFO: 06: scaling down Sep 3 14:23:42.520: INFO: Updating deployment webserver Sep 3 14:23:42.571: INFO: 07: rolling back a rollout for deployment "webserver" Sep 3 14:23:42.625: INFO: Updating deployment webserver Sep 3 14:23:42.625: INFO: 08: rolling back a rollout for deployment "webserver" Sep 3 14:23:42.633: INFO: Updating deployment webserver Sep 3 14:23:42.633: INFO: 09: triggering a new rollout for deployment "webserver" Sep 3 14:23:42.639: INFO: Updating deployment webserver Sep 3 14:23:42.639: INFO: 10: resuming deployment "webserver" Sep 3 14:23:42.642: INFO: 10: scaling down Sep 3 14:23:42.645: INFO: Updating deployment webserver Sep 3 
14:23:47.305: INFO: 11: triggering a new rollout for deployment "webserver" Sep 3 14:23:47.312: INFO: 11: scaling up Sep 3 14:23:47.325: INFO: Updating deployment webserver Sep 3 14:23:48.799: INFO: 12: arbitrarily deleting one or more deployment pods for deployment "webserver" Sep 3 14:23:48.816: INFO: 12: deleting deployment pod "webserver-7748f58bfd-gkspg" Sep 3 14:23:48.926: INFO: 12: deleting deployment pod "webserver-86766955d6-46qpb" Sep 3 14:23:48.935: INFO: 12: deleting deployment pod "webserver-86766955d6-czq2c" Sep 3 14:23:48.944: INFO: 12: deleting deployment pod "webserver-86766955d6-gv4wh" Sep 3 14:23:48.953: INFO: 13: scaling deployment "webserver" Sep 3 14:23:48.958: INFO: 13: scaling up Sep 3 14:23:48.964: INFO: Updating deployment webserver Sep 3 14:23:48.964: INFO: 14: scaling deployment "webserver" Sep 3 14:23:48.966: INFO: 14: scaling down Sep 3 14:23:49.016: INFO: Updating deployment webserver Sep 3 14:23:49.016: INFO: 15: resuming deployment "webserver" Sep 3 14:23:49.121: INFO: 15: scaling up Sep 3 14:23:49.217: INFO: Updating deployment webserver Sep 3 14:23:54.044: INFO: 16: rolling back a rollout for deployment "webserver" Sep 3 14:23:54.052: INFO: Updating deployment webserver Sep 3 14:23:59.088: INFO: 17: rolling back a rollout for deployment "webserver" Sep 3 14:23:59.095: INFO: Updating deployment webserver Sep 3 14:24:02.798: INFO: 18: arbitrarily deleting one or more deployment pods for deployment "webserver" Sep 3 14:24:02.802: INFO: 18: deleting deployment pod "webserver-86766955d6-dj5gn" Sep 3 14:24:02.809: INFO: 19: triggering a new rollout for deployment "webserver" Sep 3 14:24:02.815: INFO: Updating deployment webserver Sep 3 14:24:02.819: INFO: Waiting for deployment "webserver" to be observed by the controller Sep 3 14:24:04.826: INFO: Waiting for deployment "webserver" status Sep 3 14:24:04.829: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:22, Replicas:8, UpdatedReplicas:3, ReadyReplicas:4, AvailableReplicas:4, UnavailableReplicas:4, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275842, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275842, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275842, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-8f77db44b\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 14:24:06.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:22, Replicas:8, UpdatedReplicas:4, ReadyReplicas:5, AvailableReplicas:5, UnavailableReplicas:3, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275845, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275845, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275845, loc:(*time.Location)(0x770e980)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-8f77db44b\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 14:24:08.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:22, Replicas:8, UpdatedReplicas:5, ReadyReplicas:5, AvailableReplicas:5, UnavailableReplicas:3, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275845, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275845, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275848, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-8f77db44b\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 14:24:10.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:22, Replicas:8, UpdatedReplicas:6, ReadyReplicas:5, AvailableReplicas:5, UnavailableReplicas:3, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275845, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275845, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275848, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-8f77db44b\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 14:24:12.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:22, Replicas:8, UpdatedReplicas:6, ReadyReplicas:5, AvailableReplicas:5, UnavailableReplicas:3, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275845, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275845, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275848, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-8f77db44b\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 14:24:14.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:22, Replicas:7, UpdatedReplicas:6, ReadyReplicas:5, AvailableReplicas:5, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275845, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275845, loc:(*time.Location)(0x770e980)}}, 
Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275854, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-8f77db44b\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 14:24:16.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:22, Replicas:6, UpdatedReplicas:6, ReadyReplicas:5, AvailableReplicas:5, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275845, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275845, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275856, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-8f77db44b\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 14:24:18.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:22, Replicas:6, UpdatedReplicas:6, ReadyReplicas:5, AvailableReplicas:5, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275845, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275845, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275856, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766275819, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-8f77db44b\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 14:24:20.833: INFO: Checking deployment "webserver" for a complete condition [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 3 14:24:20.840: INFO: Deployment "webserver": &Deployment{ObjectMeta:{webserver deployment-5231 /apis/apps/v1/namespaces/deployment-5231/deployments/webserver 664da464-cecb-4f0d-bcbb-53b5ca544f94 1071970 22 2021-09-03 14:23:39 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:10] [] [] []},Spec:DeploymentSpec{Replicas:*6,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [{A 6 nil} {A 11 nil} {A 19 nil}] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 
0xc00315bac8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*2,Paused:false,ProgressDeadlineSeconds:*30,},Status:DeploymentStatus{ObservedGeneration:22,Replicas:6,UpdatedReplicas:6,AvailableReplicas:6,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-09-03 14:24:05 +0000 UTC,LastTransitionTime:2021-09-03 14:24:05 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "webserver-8f77db44b" has successfully progressed.,LastUpdateTime:2021-09-03 14:24:19 +0000 UTC,LastTransitionTime:2021-09-03 14:23:39 +0000 UTC,},},ReadyReplicas:6,CollisionCount:nil,},} Sep 3 14:24:20.844: INFO: New ReplicaSet "webserver-8f77db44b" of Deployment "webserver": &ReplicaSet{ObjectMeta:{webserver-8f77db44b deployment-5231 /apis/apps/v1/namespaces/deployment-5231/replicasets/webserver-8f77db44b 4f0d550d-b19c-4a7c-9655-bd287762857c 1071968 5 2021-09-03 14:24:02 +0000 UTC map[name:httpd pod-template-hash:8f77db44b] map[deployment.kubernetes.io/desired-replicas:6 deployment.kubernetes.io/max-replicas:8 deployment.kubernetes.io/revision:10] [{apps/v1 Deployment webserver 664da464-cecb-4f0d-bcbb-53b5ca544f94 0xc003514110 0xc003514111}] [] []},Spec:ReplicaSetSpec{Replicas:*6,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 8f77db44b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:8f77db44b] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [{A 6 nil} {A 11 nil} {A 19 nil}] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003514170 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:6,FullyLabeledReplicas:6,ObservedGeneration:5,ReadyReplicas:6,AvailableReplicas:6,Conditions:[]ReplicaSetCondition{},},} Sep 3 14:24:20.844: INFO: All old ReplicaSets of Deployment "webserver": Sep 3 14:24:20.845: INFO: &ReplicaSet{ObjectMeta:{webserver-86766955d6 deployment-5231 /apis/apps/v1/namespaces/deployment-5231/replicasets/webserver-86766955d6 efaf7087-8d5e-446c-b1a0-5fd5d61aaceb 1071602 10 2021-09-03 14:23:42 +0000 UTC map[name:httpd pod-template-hash:86766955d6] map[deployment.kubernetes.io/desired-replicas:6 deployment.kubernetes.io/max-replicas:8 deployment.kubernetes.io/revision:8 deployment.kubernetes.io/revision-history:4,6] [{apps/v1 Deployment webserver 664da464-cecb-4f0d-bcbb-53b5ca544f94 0xc003514040 
0xc003514041}] [] [{kube-controller-manager Update apps/v1 2021-09-03 14:24:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{},"f:deployment.kubernetes.io/revision-history":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"664da464-cecb-4f0d-bcbb-53b5ca544f94\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"A\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 86766955d6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:86766955d6] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [{A 6 nil}] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0035140a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:10,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 3 14:24:20.845: INFO: &ReplicaSet{ObjectMeta:{webserver-5c48b4c4f8 deployment-5231 /apis/apps/v1/namespaces/deployment-5231/replicasets/webserver-5c48b4c4f8 026ce772-6325-4599-9c3d-24a4185e220f 1071922 9 2021-09-03 14:23:47 +0000 UTC map[name:httpd pod-template-hash:5c48b4c4f8] map[deployment.kubernetes.io/desired-replicas:6 deployment.kubernetes.io/max-replicas:8 deployment.kubernetes.io/revision:9 deployment.kubernetes.io/revision-history:7] [{apps/v1 Deployment webserver 664da464-cecb-4f0d-bcbb-53b5ca544f94 0xc00315bf80 0xc00315bf81}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 5c48b4c4f8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:5c48b4c4f8] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [{A 6 nil} {A 11 nil}] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00315bfd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:9,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 3 14:24:20.849: INFO: Pod "webserver-8f77db44b-5xpsb" is available: &Pod{ObjectMeta:{webserver-8f77db44b-5xpsb webserver-8f77db44b- deployment-5231 /api/v1/namespaces/deployment-5231/pods/webserver-8f77db44b-5xpsb fa236911-8a4c-4db1-be85-f3e044cb04cf 1071589 0 2021-09-03 14:24:02 +0000 UTC map[name:httpd pod-template-hash:8f77db44b] map[] [{apps/v1 ReplicaSet webserver-8f77db44b 4f0d550d-b19c-4a7c-9655-bd287762857c 0xc0035147a7 0xc0035147a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pdp7m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pdp7m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:A,Value:6,ValueFrom:nil,},EnvVar{Name:A,Value:11,ValueFrom:nil,},EnvVar{Name:A,Value:19,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pdp7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-5n8xl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kub
ernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:24:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:24:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:24:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:24:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:192.168.2.29,StartTime:2021-09-03 14:24:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-03 14:24:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cca29e0bb9e029d615cdbe9aa4ad688974552ddf0b06507842b0acdd637e8bab,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.29,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 14:24:20.850: INFO: Pod "webserver-8f77db44b-cvvd8" is available: &Pod{ObjectMeta:{webserver-8f77db44b-cvvd8 webserver-8f77db44b- deployment-5231 /api/v1/namespaces/deployment-5231/pods/webserver-8f77db44b-cvvd8 2da52ca3-4c7c-494c-ad9b-46da55263c08 1071912 0 2021-09-03 14:24:08 +0000 UTC map[name:httpd pod-template-hash:8f77db44b] map[] [{apps/v1 ReplicaSet webserver-8f77db44b 4f0d550d-b19c-4a7c-9655-bd287762857c 0xc003514917 0xc003514918}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pdp7m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pdp7m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:A,Value:6,ValueFrom:nil,},EnvVar{Name:A,Value:11,ValueFrom:nil,},EnvVar{Name:A,Value:19,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pdp7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-5n8xl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:24:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:24:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:24:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-09-03 14:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:192.168.2.36,StartTime:2021-09-03 14:24:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-03 14:24:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://157d7f14786ed64d40d48c20c28b43b532e6f51870799faab3be9db9f3feb61d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.36,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 14:24:20.850: INFO: Pod "webserver-8f77db44b-dr4zq" is available: &Pod{ObjectMeta:{webserver-8f77db44b-dr4zq webserver-8f77db44b- deployment-5231 /api/v1/namespaces/deployment-5231/pods/webserver-8f77db44b-dr4zq 2b0b53bc-da94-4962-8183-75ae80c465bd 1071833 0 2021-09-03 14:24:08 +0000 UTC map[name:httpd pod-template-hash:8f77db44b] map[] [{apps/v1 ReplicaSet webserver-8f77db44b 4f0d550d-b19c-4a7c-9655-bd287762857c 0xc003514a87 0xc003514a88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pdp7m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pdp7m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:A,Value:6,ValueFrom:nil,},EnvVar{Name:A,Value:11,ValueFrom:nil,},EnvVar{Name:A,Value:19,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pdp7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-7jvhm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain
:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:24:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:24:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:24:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:192.168.1.64,StartTime:2021-09-03 14:24:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-03 14:24:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f97996b8c8ceec8610d81a3c6853f85ccf95c947c2ff3e71796e10c503607c62,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 14:24:20.850: INFO: Pod "webserver-8f77db44b-f2c7m" is available: &Pod{ObjectMeta:{webserver-8f77db44b-f2c7m webserver-8f77db44b- deployment-5231 /api/v1/namespaces/deployment-5231/pods/webserver-8f77db44b-f2c7m 6b79a939-32cf-484f-9b52-3453365320bd 1071680 0 2021-09-03 14:24:02 +0000 UTC map[name:httpd pod-template-hash:8f77db44b] map[] [{apps/v1 ReplicaSet webserver-8f77db44b 4f0d550d-b19c-4a7c-9655-bd287762857c 0xc003514bf7 0xc003514bf8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pdp7m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pdp7m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:A,Value:6,ValueFrom:nil,},EnvVar{Name:A,Value:11,ValueFrom:nil,},EnvVar{Name:A,Value:19,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pdp7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-7jvhm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:24:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:24:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:24:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-09-03 14:24:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:192.168.1.62,StartTime:2021-09-03 14:24:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-03 14:24:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0a011f24cf030853710734e7d9763d8d3ba3a0eb9d6250b752a8e36909903dcd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.62,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 14:24:20.851: INFO: Pod "webserver-8f77db44b-jphwg" is available: &Pod{ObjectMeta:{webserver-8f77db44b-jphwg webserver-8f77db44b- deployment-5231 /api/v1/namespaces/deployment-5231/pods/webserver-8f77db44b-jphwg a96142b7-bc2b-4286-9daa-7b6eb2b72ade 1071651 0 2021-09-03 14:24:02 +0000 UTC map[name:httpd pod-template-hash:8f77db44b] map[] [{apps/v1 ReplicaSet webserver-8f77db44b 4f0d550d-b19c-4a7c-9655-bd287762857c 0xc003514d67 0xc003514d68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pdp7m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pdp7m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:A,Value:6,ValueFrom:nil,},EnvVar{Name:A,Value:11,ValueFrom:nil,},EnvVar{Name:A,Value:19,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pdp7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-7jvhm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomai
n:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:24:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:24:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:24:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:24:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:192.168.1.60,StartTime:2021-09-03 14:24:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-03 14:24:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6a3b5aba78226b98deb33c17da95fa7f36845efbf7b1c9c7aa70e7fd078775b2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.60,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 14:24:20.851: INFO: Pod "webserver-8f77db44b-wdjfn" is available: &Pod{ObjectMeta:{webserver-8f77db44b-wdjfn webserver-8f77db44b- deployment-5231 /api/v1/namespaces/deployment-5231/pods/webserver-8f77db44b-wdjfn 84c5a2be-5255-4748-9f79-88602013c022 1071966 0 2021-09-03 14:24:05 +0000 UTC map[name:httpd pod-template-hash:8f77db44b] map[] [{apps/v1 ReplicaSet webserver-8f77db44b 4f0d550d-b19c-4a7c-9655-bd287762857c 0xc003514ed7 0xc003514ed8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pdp7m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pdp7m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:A,Value:6,ValueFrom:nil,},EnvVar{Name:A,Value:11,ValueFrom:nil,},EnvVar{Name:A,Value:19,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pdp7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-5n8xl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:24:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:24:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 14:24:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-09-03 14:24:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:192.168.2.35,StartTime:2021-09-03 14:24:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-03 14:24:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1c447e242515253e2010fa6d0d31a0bf35df3af9b78dbce00cf710f7e87368b2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.35,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:24:20.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5231" for this suite. • [SLOW TEST:41.642 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 iterative rollouts should eventually progress /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:121 ------------------------------ {"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":1,"skipped":137,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:24:20.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:68 [It] should observe PodDisruptionBudget status updated /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:97 STEP: Waiting for the pdb to be processed STEP: Waiting for all pods to be running Sep 3 14:24:22.973: INFO: running pods: 0 < 3 Sep 3 14:24:24.978: INFO: running pods: 0 < 3 Sep 3 14:24:26.978: INFO: running pods: 0 < 3 Sep 3 14:24:29.017: INFO: running pods: 1 < 3 [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:24:30.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2757" for this suite. 
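For reference, the "webserver" Deployment whose spec and status are dumped during the rollout test above boils down to an object like the following minimal Go sketch, built with the k8s.io/api types the suite itself uses. The replica count, image, labels, 25%/25% rolling-update strategy, 30s progress deadline and revision history of 2 are taken from the dump; the rest is illustrative.

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

// webserverDeployment mirrors the spec dumped above: 6 replicas of
// httpd:2.4.38-alpine rolled out with a 25%/25% RollingUpdate strategy,
// a 30s progress deadline and a revision history of 2.
func webserverDeployment() *appsv1.Deployment {
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")
	labels := map[string]string{"name": "httpd"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "webserver", Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas:                int32Ptr(6),
			RevisionHistoryLimit:    int32Ptr(2),
			ProgressDeadlineSeconds: int32Ptr(30),
			Selector:                &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "httpd",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
}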
• [SLOW TEST:10.110 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should observe PodDisruptionBudget status updated /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:97 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated","total":-1,"completed":2,"skipped":146,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:24:31.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a private image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:98 Sep 3 14:24:31.083: INFO: Only supported for providers [gce gke] (not skeleton) [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:24:31.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6682" for this suite. S [SKIPPING] [0.041 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a private image [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:98 Only supported for providers [gce gke] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:100 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:24:31.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:105 STEP: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating replica set "condition-test" that asks for more than the allowed pod quota STEP: Checking replica set "condition-test" has the desired failure condition set STEP: Scaling down replica set "condition-test" to satisfy pod quota Sep 3 14:24:34.031: INFO: Updating replica set "condition-test" STEP: Checking replica set "condition-test" has no failure condition set 
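The "condition-test" quota in the steps above caps the namespace at two pods, which is what forces the over-sized ReplicaSet to report a ReplicaFailure condition until it is scaled back within the quota. A minimal sketch of such a quota follows; the name is the test's, everything else is the standard ResourceQuota shape.

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podQuota caps the given namespace at two pods, like the "condition-test"
// quota above; a ReplicaSet asking for more then carries a ReplicaFailure
// condition until it is scaled down to fit.
func podQuota(namespace string) *corev1.ResourceQuota {
	return &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test", Namespace: namespace},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods: resource.MustParse("2"),
			},
		},
	}
}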
[AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:24:35.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9531" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota","total":-1,"completed":3,"skipped":630,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:24:00.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should remove pods when job is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:75 STEP: Creating a job STEP: Ensure pods equal to paralellism count is attached to the job STEP: Delete the job STEP: deleting Job.batch all-pods-removed in namespace job-9435, will wait for the garbage collector to delete the pods Sep 3 14:24:08.704: INFO: Deleting Job.batch all-pods-removed took: 5.56827ms Sep 3 14:24:08.805: INFO: Terminating Job.batch all-pods-removed pods took: 100.282598ms STEP: Ensure the pods associated with the job are also deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:24:42.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9435" for this suite. 
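The "Delete the job" step above removes the Job object and then waits for the garbage collector to clean up its pods. In client-go terms that is a delete with a non-orphaning propagation policy, roughly as below; background propagation is an assumption here, the log only states that the GC performs the pod cleanup.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteJobAndWaitForGC deletes a Job without orphaning its pods: the Job
// object disappears quickly and the garbage collector then removes the pods,
// which is why the test keeps waiting afterwards. Clientset plumbing is
// assumed; the propagation policy is the point.
func deleteJobAndWaitForGC(ctx context.Context, c kubernetes.Interface, namespace, name string) error {
	propagation := metav1.DeletePropagationBackground
	return c.BatchV1().Jobs(namespace).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &propagation,
	})
}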
• [SLOW TEST:41.539 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should remove pods when job is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:75 ------------------------------ {"msg":"PASSED [sig-apps] Job should remove pods when job is deleted","total":-1,"completed":3,"skipped":1675,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:23:39.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob Sep 3 14:23:39.241: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Sep 3 14:23:39.244: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:58 [It] should remove from active list jobs that have been deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:197 STEP: Creating a ForbidConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Deleting the job STEP: deleting Job.batch forbid-1630679040 in namespace cronjob-1232, will wait for the garbage collector to delete the pods Sep 3 14:24:05.319: INFO: Deleting Job.batch forbid-1630679040 took: 6.992327ms Sep 3 14:24:05.819: INFO: Terminating Job.batch forbid-1630679040 pods took: 500.236804ms STEP: Ensuring job was deleted STEP: Ensuring the job is not in the cronjob active list STEP: Ensuring MissingJob event has occurred STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:24:48.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-1232" for this suite. 
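The ForbidConcurrent cronjob created above is a CronJob with concurrencyPolicy Forbid; against this v1.19 cluster that is the batch/v1beta1 API. A minimal sketch with an illustrative schedule and pod template follows; swapping ForbidConcurrent for ReplaceConcurrent gives the behaviour exercised by the "should replace jobs when ReplaceConcurrent" spec later in this log.

package main

import (
	batchv1 "k8s.io/api/batch/v1"
	batchv1beta1 "k8s.io/api/batch/v1beta1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// forbidCronJob schedules a short-lived Job every minute but forbids a new
// run while a previous one is still active, as in the test above. Only the
// concurrency policy is taken from the log; schedule, image and command are
// illustrative.
func forbidCronJob() *batchv1beta1.CronJob {
	return &batchv1beta1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "forbid"},
		Spec: batchv1beta1.CronJobSpec{
			Schedule:          "*/1 * * * *",
			ConcurrencyPolicy: batchv1beta1.ForbidConcurrent,
			JobTemplate: batchv1beta1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "c",
								Image:   "busybox",
								Command: []string{"sleep", "300"},
							}},
						},
					},
				},
			},
		},
	}
}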
• [SLOW TEST:69.293 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should remove from active list jobs that have been deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:197 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":1,"skipped":71,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:24:48.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a private image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:68 Sep 3 14:24:48.583: INFO: Only supported for providers [gce gke] (not skeleton) [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:24:48.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9519" for this suite. S [SKIPPING] [0.041 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a private image [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:68 Only supported for providers [gce gke] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:70 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:24:48.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] should not disrupt a cloud load-balancer's connectivity during rollout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:137 Sep 3 14:24:48.935: INFO: Only supported for providers [aws azure gce gke] (not skeleton) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 3 14:24:48.940: INFO: Log out all the ReplicaSets if there is no deployment created [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:24:48.943: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "deployment-5181" for this suite. S [SKIPPING] [0.208 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not disrupt a cloud load-balancer's connectivity during rollout [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:137 Only supported for providers [aws azure gce gke] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:138 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:24:49.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:68 [It] should create a PodDisruptionBudget /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:93 STEP: Waiting for the pdb to be processed [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:24:51.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-103" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget","total":-1,"completed":2,"skipped":352,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:24:51.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are not locally restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:117 STEP: Looking for a node to schedule job pod STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:24:57.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1746" for this suite. 
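The Job spec just exercised relies on pods that are not restarted in place: with restartPolicy Never a failed task's pod stays failed and the Job controller creates a replacement pod until the completion count is reached. A sketch of a Job with that shape follows; the image, command, counts and name are illustrative rather than the test's own.

package main

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// failSometimesJob runs tasks that are never restarted locally; when one
// fails, the Job controller schedules a fresh pod until four completions
// are recorded.
func failSometimesJob() *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "fail-sometimes"},
		Spec: batchv1.JobSpec{
			Parallelism:  int32Ptr(2),
			Completions:  int32Ptr(4),
			BackoffLimit: int32Ptr(10), // allow several retries before the Job is marked failed
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:  "c",
						Image: "busybox",
						// Exits non-zero on odd-numbered seconds, a stand-in
						// for a task that sometimes fails.
						Command: []string{"sh", "-c", "test $(( $(date +%s) % 2 )) -eq 0"},
					}},
				},
			},
		},
	}
}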
• [SLOW TEST:6.055 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are not locally restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:117 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":3,"skipped":414,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:23:51.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:58 [It] should schedule multiple jobs concurrently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 STEP: Creating a cronjob STEP: Ensuring more than one job is running at a time STEP: Ensuring at least two running jobs exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:25:03.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-6063" for this suite. • [SLOW TEST:72.428 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should schedule multiple jobs concurrently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently","total":-1,"completed":2,"skipped":322,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:23:39.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob Sep 3 14:23:39.750: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Sep 3 14:23:39.754: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:58 [It] should replace jobs when ReplaceConcurrent /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:142 STEP: Creating a ReplaceConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring the job is replaced with a new one Sep 3 14:25:03.775: INFO: Warning: Found 0 jobs in namespace cronjob-9277 STEP: Removing cronjob [AfterEach] [sig-apps] CronJob 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:25:05.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-9277" for this suite. • [SLOW TEST:86.072 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should replace jobs when ReplaceConcurrent /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:142 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent","total":-1,"completed":1,"skipped":492,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:25:04.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:68 [BeforeEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:25:04.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption-2 STEP: Waiting for a default service account to be provisioned in namespace [It] should list and delete a collection of PodDisruptionBudgets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:77 STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: listing a collection of PDBs across all namespaces STEP: listing a collection of PDBs in namespace disruption-351 STEP: deleting a collection of PDBs STEP: Waiting for the PDB collection to be deleted [AfterEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:25:10.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2-8528" for this suite. [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:25:10.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-351" for this suite. 
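The listing and collection-delete steps above map onto the generated PodDisruptionBudget client; a sketch of the same sequence with client-go follows. The label selector is an assumption, the test only deletes the PDBs it created itself.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listAndDeletePDBs lists PodDisruptionBudgets across all namespaces, then in
// one namespace, then deletes them as a collection, after which the test
// waits for the collection to be gone.
func listAndDeletePDBs(ctx context.Context, c kubernetes.Interface, ns string) error {
	if _, err := c.PolicyV1beta1().PodDisruptionBudgets(metav1.NamespaceAll).List(ctx, metav1.ListOptions{}); err != nil {
		return err
	}
	if _, err := c.PolicyV1beta1().PodDisruptionBudgets(ns).List(ctx, metav1.ListOptions{}); err != nil {
		return err
	}
	return c.PolicyV1beta1().PodDisruptionBudgets(ns).DeleteCollection(ctx,
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "foo=bar"}) // selector is illustrative
}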
• [SLOW TEST:6.135 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:74 should list and delete a collection of PodDisruptionBudgets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:77 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets","total":-1,"completed":3,"skipped":389,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:25:10.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:68 [It] should update/patch PodDisruptionBudget status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:115 STEP: Waiting for the pdb to be processed STEP: Updating PodDisruptionBudget status STEP: Waiting for all pods to be running Sep 3 14:25:12.789: INFO: running pods: 0 < 1 Sep 3 14:25:14.794: INFO: running pods: 0 < 1 STEP: locating a running pod STEP: Waiting for the pdb to be processed STEP: Patching PodDisruptionBudget status STEP: Waiting for the pdb to be processed [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:25:16.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2660" for this suite. 
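The status that the spec above updates and then patches lives in the PDB's status subresource. Once the disruption controller has processed the object, that stanza looks roughly like the fragment below; the numbers are illustrative and depend on how many matching pods are healthy at the time.

status:
  observedGeneration: 1
  currentHealthy: 1
  desiredHealthy: 1
  expectedPods: 1
  disruptionsAllowed: 0

disruptionsAllowed is the field the eviction API consults: an eviction is admitted only while it is greater than zero.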
• [SLOW TEST:6.096 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update/patch PodDisruptionBudget status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:115 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status","total":-1,"completed":4,"skipped":693,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:23:39.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset Sep 3 14:23:39.237: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Sep 3 14:23:39.240: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3187 [It] should not deadlock when a pod's predecessor fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:248 STEP: Creating statefulset ss in namespace statefulset-3187 Sep 3 14:23:39.248: INFO: Default storage class: "local-path" Sep 3 14:23:39.255: INFO: Found 0 stateful pods, waiting for 1 Sep 3 14:23:49.262: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false STEP: Resuming stateful pod at index 0. Sep 3 14:23:49.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3187 exec ss-0 -- /bin/sh -x -c dd if=/dev/zero of=/data/statefulset-continue bs=1 count=1 conv=fsync' Sep 3 14:23:50.636: INFO: stderr: "+ dd 'if=/dev/zero' 'of=/data/statefulset-continue' 'bs=1' 'count=1' 'conv=fsync'\n1+0 records in\n1+0 records out\n" Sep 3 14:23:50.636: INFO: stdout: "" Sep 3 14:23:50.636: INFO: Resumed pod ss-0 STEP: Waiting for stateful pod at index 1 to enter running. Sep 3 14:23:50.639: INFO: Found 1 stateful pods, waiting for 2 Sep 3 14:24:00.642: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 3 14:24:00.642: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Pending - Ready=false Sep 3 14:24:10.644: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 3 14:24:10.644: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Pending - Ready=false Sep 3 14:24:20.644: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 3 14:24:20.644: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false STEP: Deleting healthy stateful pod at index 0. STEP: Confirming stateful pod at index 0 is recreated. 
Sep 3 14:24:20.656: INFO: Found 1 stateful pods, waiting for 2 Sep 3 14:24:30.661: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 3 14:24:30.661: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false STEP: Resuming stateful pod at index 1. Sep 3 14:24:30.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-3187 exec ss-1 -- /bin/sh -x -c dd if=/dev/zero of=/data/statefulset-continue bs=1 count=1 conv=fsync' Sep 3 14:24:31.191: INFO: stderr: "+ dd 'if=/dev/zero' 'of=/data/statefulset-continue' 'bs=1' 'count=1' 'conv=fsync'\n1+0 records in\n1+0 records out\n" Sep 3 14:24:31.191: INFO: stdout: "" Sep 3 14:24:31.191: INFO: Resumed pod ss-1 STEP: Confirming all stateful pods in statefulset are created. Sep 3 14:24:31.196: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 3 14:24:31.196: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=false Sep 3 14:24:41.199: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 3 14:24:41.199: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 3 14:24:41.199: INFO: Deleting all statefulset in ns statefulset-3187 Sep 3 14:24:41.201: INFO: Scaling statefulset ss to 0 Sep 3 14:25:11.217: INFO: Waiting for statefulset status.replicas updated to 0 Sep 3 14:25:11.221: INFO: Deleting statefulset ss Sep 3 14:25:11.230: INFO: Deleting pvc: datadir-ss-0 with volume pvc-0e6ebbe3-7fe4-4183-93b9-859c3b11c73e Sep 3 14:25:11.235: INFO: Deleting pvc: datadir-ss-1 with volume pvc-47be8132-fe73-4ac3-989a-a2b505f542ff Sep 3 14:25:11.244: INFO: Still waiting for pvs of statefulset to disappear: pvc-0e6ebbe3-7fe4-4183-93b9-859c3b11c73e: {Phase:Bound Message: Reason:} pvc-47be8132-fe73-4ac3-989a-a2b505f542ff: {Phase:Bound Message: Reason:} [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:25:21.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3187" for this suite. 
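For orientation, the stateful set named ss that the spec above drives is, in outline, an apps/v1 StatefulSet with a persistent volume claim template mounted at /data; the claims the log deletes afterwards (datadir-ss-0, datadir-ss-1) come from that template. The sketch below is hedged: the labels, image, and storage size are illustrative, and only the object and service names, the replica count, and the /data mount are taken from the log.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
  namespace: statefulset-3187
spec:
  serviceName: test             # the "service test" created in BeforeEach
  replicas: 2
  selector:
    matchLabels:
      app: ss                   # illustrative labels
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.38-alpine   # image used by the other StatefulSet specs in this run
        volumeMounts:
        - name: datadir
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi          # illustrative size; the cluster's default class here is local-path

The "Resuming stateful pod" kubectl exec calls in the log write /data/statefulset-continue inside each pod; the test gates readiness on that file, which is why pods sit at Running - Ready=false until it is written.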
• [SLOW TEST:102.109 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not deadlock when a pod's predecessor fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:248 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails","total":-1,"completed":1,"skipped":66,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:25:21.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should fail when exceeds active deadline /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:139 STEP: Creating a job STEP: Ensuring job past active deadline [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:25:23.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9742" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":2,"skipped":152,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Sep 3 14:25:24.068: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:25:16.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1932 [It] should adopt matching orphans and release non-matching pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:163 STEP: Creating statefulset ss in namespace statefulset-1932 Sep 3 14:25:16.897: INFO: Default storage class: "local-path" STEP: Saturating stateful set ss Sep 3 14:25:16.901: INFO: Waiting for stateful pod at index 0 to enter Running Sep 3 14:25:16.904: INFO: Found 0 stateful pods, waiting for 1 Sep 3 14:25:26.908: INFO: Waiting 
for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 3 14:25:26.908: INFO: Resuming stateful pod at index 0 Sep 3 14:25:26.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-1932 exec ss-0 -- /bin/sh -x -c dd if=/dev/zero of=/data/statefulset-continue bs=1 count=1 conv=fsync' Sep 3 14:25:27.212: INFO: stderr: "+ dd 'if=/dev/zero' 'of=/data/statefulset-continue' 'bs=1' 'count=1' 'conv=fsync'\n1+0 records in\n1+0 records out\n" Sep 3 14:25:27.212: INFO: stdout: "" Sep 3 14:25:27.212: INFO: Resumed pod ss-0 STEP: Checking that stateful set pods are created with ControllerRef STEP: Orphaning one of the stateful set's pods Sep 3 14:25:27.728: INFO: Successfully updated pod "ss-0" STEP: Checking that the stateful set readopts the pod Sep 3 14:25:27.728: INFO: Waiting up to 10m0s for pod "ss-0" in namespace "statefulset-1932" to be "adopted" Sep 3 14:25:27.731: INFO: Pod "ss-0": Phase="Running", Reason="", readiness=true. Elapsed: 2.861308ms Sep 3 14:25:29.734: INFO: Pod "ss-0": Phase="Running", Reason="", readiness=true. Elapsed: 2.006120032s Sep 3 14:25:29.734: INFO: Pod "ss-0" satisfied condition "adopted" STEP: Removing the labels from one of the stateful set's pods Sep 3 14:25:30.247: INFO: Successfully updated pod "ss-0" STEP: Checking that the stateful set releases the pod Sep 3 14:25:30.247: INFO: Waiting up to 10m0s for pod "ss-0" in namespace "statefulset-1932" to be "released" Sep 3 14:25:30.250: INFO: Pod "ss-0": Phase="Running", Reason="", readiness=true. Elapsed: 2.989295ms Sep 3 14:25:32.254: INFO: Pod "ss-0": Phase="Running", Reason="", readiness=true. Elapsed: 2.007676021s Sep 3 14:25:32.254: INFO: Pod "ss-0" satisfied condition "released" STEP: Readding labels to the stateful set's pod Sep 3 14:25:32.767: INFO: Successfully updated pod "ss-0" STEP: Checking that the stateful set readopts the pod Sep 3 14:25:32.767: INFO: Waiting up to 10m0s for pod "ss-0" in namespace "statefulset-1932" to be "adopted" Sep 3 14:25:32.770: INFO: Pod "ss-0": Phase="Running", Reason="", readiness=true. Elapsed: 3.11209ms Sep 3 14:25:34.774: INFO: Pod "ss-0": Phase="Running", Reason="", readiness=true. Elapsed: 2.006897381s Sep 3 14:25:34.774: INFO: Pod "ss-0" satisfied condition "adopted" [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 3 14:25:34.774: INFO: Deleting all statefulset in ns statefulset-1932 Sep 3 14:25:34.777: INFO: Scaling statefulset ss to 0 Sep 3 14:25:44.792: INFO: Waiting for statefulset status.replicas updated to 0 Sep 3 14:25:44.795: INFO: Deleting statefulset ss Sep 3 14:25:44.803: INFO: Deleting pvc: datadir-ss-0 with volume pvc-5ca70baa-d1bc-401b-a1ba-299e772ed2b1 Sep 3 14:25:44.812: INFO: Still waiting for pvs of statefulset to disappear: pvc-5ca70baa-d1bc-401b-a1ba-299e772ed2b1: {Phase:Bound Message: Reason:} [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:25:54.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1932" for this suite. 
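Adoption and release in the spec above turn on two pieces of pod metadata: the labels that match the StatefulSet's selector, and the controller ownerReference. A pod owned by ss carries metadata along these lines (the UID and label are illustrative); deleting the ownerReference orphans the pod and the controller re-adopts it as long as the labels still match, while removing the labels makes the controller release it again.

apiVersion: v1
kind: Pod
metadata:
  name: ss-0
  namespace: statefulset-1932
  labels:
    app: ss                                    # must match spec.selector for adoption
  ownerReferences:
  - apiVersion: apps/v1
    kind: StatefulSet
    name: ss
    uid: 11111111-2222-3333-4444-555555555555  # illustrative UID
    controller: true
    blockOwnerDeletion: true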
• [SLOW TEST:37.976 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should adopt matching orphans and release non-matching pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:163 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","total":-1,"completed":5,"skipped":700,"failed":0} Sep 3 14:25:54.833: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:24:57.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9240 [It] should implement legacy replacement when the update strategy is OnDelete /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:499 STEP: Creating a new StatefulSet Sep 3 14:24:57.497: INFO: Found 0 stateful pods, waiting for 3 Sep 3 14:25:07.502: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 3 14:25:07.502: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 3 14:25:07.502: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Restoring Pods to the current revision Sep 3 14:25:07.540: INFO: Found 1 stateful pods, waiting for 3 Sep 3 14:25:17.545: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 3 14:25:17.545: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 3 14:25:17.545: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Sep 3 14:25:17.571: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Recreating Pods at the new revision Sep 3 14:25:27.608: INFO: Found 0 stateful pods, waiting for 3 Sep 3 14:25:37.612: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 3 14:25:37.612: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 3 14:25:37.612: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 3 14:25:37.619: INFO: Deleting all statefulset in ns statefulset-9240 Sep 3 14:25:37.622: 
INFO: Scaling statefulset ss2 to 0 Sep 3 14:26:07.719: INFO: Waiting for statefulset status.replicas updated to 0 Sep 3 14:26:07.722: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:26:07.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9240" for this suite. • [SLOW TEST:70.292 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should implement legacy replacement when the update strategy is OnDelete /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:499 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete","total":-1,"completed":4,"skipped":431,"failed":0} Sep 3 14:26:07.748: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:24:09.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8753 [It] should provide basic identity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:124 STEP: Creating statefulset ss in namespace statefulset-8753 Sep 3 14:24:10.050: INFO: Default storage class: "local-path" STEP: Saturating stateful set ss Sep 3 14:24:10.055: INFO: Waiting for stateful pod at index 0 to enter Running Sep 3 14:24:10.058: INFO: Found 0 stateful pods, waiting for 1 Sep 3 14:24:20.062: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 3 14:24:20.062: INFO: Resuming stateful pod at index 0 Sep 3 14:24:20.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-0 -- /bin/sh -x -c dd if=/dev/zero of=/data/statefulset-continue bs=1 count=1 conv=fsync' Sep 3 14:24:20.319: INFO: stderr: "+ dd 'if=/dev/zero' 'of=/data/statefulset-continue' 'bs=1' 'count=1' 'conv=fsync'\n1+0 records in\n1+0 records out\n" Sep 3 14:24:20.319: INFO: stdout: "" Sep 3 14:24:20.319: INFO: Resumed pod ss-0 Sep 3 14:24:20.319: INFO: Waiting for stateful pod at index 1 to enter Running Sep 3 14:24:20.322: INFO: Found 1 stateful pods, waiting for 2 Sep 3 14:24:30.328: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 3 14:24:30.328: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Pending - Ready=false Sep 3 14:24:40.417: INFO: Waiting for pod ss-0 to enter Running - 
Ready=true, currently Running - Ready=true Sep 3 14:24:40.417: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Sep 3 14:24:40.417: INFO: Resuming stateful pod at index 1 Sep 3 14:24:40.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-1 -- /bin/sh -x -c dd if=/dev/zero of=/data/statefulset-continue bs=1 count=1 conv=fsync' Sep 3 14:24:40.826: INFO: stderr: "+ dd 'if=/dev/zero' 'of=/data/statefulset-continue' 'bs=1' 'count=1' 'conv=fsync'\n1+0 records in\n1+0 records out\n" Sep 3 14:24:40.826: INFO: stdout: "" Sep 3 14:24:40.826: INFO: Resumed pod ss-1 Sep 3 14:24:40.826: INFO: Waiting for stateful pod at index 2 to enter Running Sep 3 14:24:41.021: INFO: Found 2 stateful pods, waiting for 3 Sep 3 14:24:51.027: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 3 14:24:51.027: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Sep 3 14:24:51.027: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Sep 3 14:24:51.027: INFO: Resuming stateful pod at index 2 Sep 3 14:24:51.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-2 -- /bin/sh -x -c dd if=/dev/zero of=/data/statefulset-continue bs=1 count=1 conv=fsync' Sep 3 14:24:51.272: INFO: stderr: "+ dd 'if=/dev/zero' 'of=/data/statefulset-continue' 'bs=1' 'count=1' 'conv=fsync'\n1+0 records in\n1+0 records out\n" Sep 3 14:24:51.273: INFO: stdout: "" Sep 3 14:24:51.273: INFO: Resumed pod ss-2 STEP: Verifying statefulset mounted data directory is usable Sep 3 14:24:51.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-0 -- /bin/sh -x -c ls -idlh /data' Sep 3 14:24:51.490: INFO: stderr: "+ ls -idlh /data\n" Sep 3 14:24:51.490: INFO: stdout: "26108133 drwxrwxrwx 2 root root 4.0K Sep 3 14:24 /data\n" Sep 3 14:24:51.490: INFO: stdout of ls -idlh /data on ss-0: 26108133 drwxrwxrwx 2 root root 4.0K Sep 3 14:24 /data Sep 3 14:24:51.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-1 -- /bin/sh -x -c ls -idlh /data' Sep 3 14:24:51.770: INFO: stderr: "+ ls -idlh /data\n" Sep 3 14:24:51.770: INFO: stdout: "28976547 drwxrwxrwx 2 root root 4.0K Sep 3 14:24 /data\n" Sep 3 14:24:51.770: INFO: stdout of ls -idlh /data on ss-1: 28976547 drwxrwxrwx 2 root root 4.0K Sep 3 14:24 /data Sep 3 14:24:51.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-2 -- /bin/sh -x -c ls -idlh /data' Sep 3 14:24:51.989: INFO: stderr: "+ ls -idlh /data\n" Sep 3 14:24:51.989: INFO: stdout: " 13971 drwxrwxrwx 2 root root 4.0K Sep 3 14:24 /data\n" Sep 3 14:24:51.989: INFO: stdout of ls -idlh /data on ss-2: 13971 drwxrwxrwx 2 root root 4.0K Sep 3 14:24 /data Sep 3 14:24:51.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-0 -- /bin/sh -x -c find /data' Sep 3 14:24:52.216: INFO: stderr: "+ find /data\n" Sep 3 14:24:52.216: INFO: stdout: "/data\n/data/statefulset-continue\n" Sep 3 14:24:52.216: INFO: stdout of find /data on ss-0: /data /data/statefulset-continue Sep 3 14:24:52.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-1 -- /bin/sh -x -c find /data' Sep 3 14:24:52.476: INFO: stderr: "+ find /data\n" 
Sep 3 14:24:52.476: INFO: stdout: "/data\n/data/statefulset-continue\n" Sep 3 14:24:52.476: INFO: stdout of find /data on ss-1: /data /data/statefulset-continue Sep 3 14:24:52.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-2 -- /bin/sh -x -c find /data' Sep 3 14:24:52.733: INFO: stderr: "+ find /data\n" Sep 3 14:24:52.733: INFO: stdout: "/data\n/data/statefulset-continue\n" Sep 3 14:24:52.733: INFO: stdout of find /data on ss-2: /data /data/statefulset-continue Sep 3 14:24:52.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-0 -- /bin/sh -x -c touch /data/1630679091273051478' Sep 3 14:24:53.009: INFO: stderr: "+ touch /data/1630679091273051478\n" Sep 3 14:24:53.010: INFO: stdout: "" Sep 3 14:24:53.010: INFO: stdout of touch /data/1630679091273051478 on ss-0: Sep 3 14:24:53.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-1 -- /bin/sh -x -c touch /data/1630679091273051478' Sep 3 14:24:53.259: INFO: stderr: "+ touch /data/1630679091273051478\n" Sep 3 14:24:53.259: INFO: stdout: "" Sep 3 14:24:53.259: INFO: stdout of touch /data/1630679091273051478 on ss-1: Sep 3 14:24:53.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-2 -- /bin/sh -x -c touch /data/1630679091273051478' Sep 3 14:24:53.481: INFO: stderr: "+ touch /data/1630679091273051478\n" Sep 3 14:24:53.481: INFO: stdout: "" Sep 3 14:24:53.481: INFO: stdout of touch /data/1630679091273051478 on ss-2: STEP: Verifying statefulset provides a stable hostname for each pod Sep 3 14:24:53.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-0 -- /bin/sh -x -c printf $(hostname)' Sep 3 14:24:53.727: INFO: stderr: "+ hostname\n+ printf ss-0\n" Sep 3 14:24:53.727: INFO: stdout: "ss-0" Sep 3 14:24:53.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-1 -- /bin/sh -x -c printf $(hostname)' Sep 3 14:24:54.028: INFO: stderr: "+ hostname\n+ printf ss-1\n" Sep 3 14:24:54.028: INFO: stdout: "ss-1" Sep 3 14:24:54.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-2 -- /bin/sh -x -c printf $(hostname)' Sep 3 14:24:54.274: INFO: stderr: "+ hostname\n+ printf ss-2\n" Sep 3 14:24:54.274: INFO: stdout: "ss-2" STEP: Verifying statefulset set proper service name Sep 3 14:24:54.274: INFO: Checking if statefulset spec.serviceName is test STEP: Running echo $(hostname) | dd of=/data/hostname conv=fsync in all stateful pods Sep 3 14:24:54.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-0 -- /bin/sh -x -c echo $(hostname) | dd of=/data/hostname conv=fsync' Sep 3 14:24:54.530: INFO: stderr: "+ dd 'of=/data/hostname' 'conv=fsync'\n+ hostname\n+ echo ss-0\n0+1 records in\n0+1 records out\n" Sep 3 14:24:54.530: INFO: stdout: "" Sep 3 14:24:54.530: INFO: stdout of echo $(hostname) | dd of=/data/hostname conv=fsync on ss-0: Sep 3 14:24:54.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-1 -- /bin/sh -x -c echo $(hostname) | dd of=/data/hostname conv=fsync' Sep 3 14:24:54.825: INFO: stderr: "+ dd 'of=/data/hostname' 'conv=fsync'\n+ hostname\n+ echo ss-1\n0+1 records in\n0+1 records out\n" Sep 3 14:24:54.825: INFO: stdout: "" 
Sep 3 14:24:54.825: INFO: stdout of echo $(hostname) | dd of=/data/hostname conv=fsync on ss-1: Sep 3 14:24:54.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-2 -- /bin/sh -x -c echo $(hostname) | dd of=/data/hostname conv=fsync' Sep 3 14:24:55.060: INFO: stderr: "+ dd 'of=/data/hostname' 'conv=fsync'\n+ hostname\n+ echo ss-2\n0+1 records in\n0+1 records out\n" Sep 3 14:24:55.060: INFO: stdout: "" Sep 3 14:24:55.060: INFO: stdout of echo $(hostname) | dd of=/data/hostname conv=fsync on ss-2: STEP: Restarting statefulset ss Sep 3 14:24:55.060: INFO: Scaling statefulset ss to 0 Sep 3 14:25:25.079: INFO: Waiting for statefulset status.replicas updated to 0 Sep 3 14:25:25.095: INFO: Found 0 stateful pods, waiting for 3 Sep 3 14:25:35.099: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 3 14:25:35.100: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Sep 3 14:25:35.100: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying statefulset mounted data directory is usable Sep 3 14:25:35.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-0 -- /bin/sh -x -c ls -idlh /data' Sep 3 14:25:35.352: INFO: stderr: "+ ls -idlh /data\n" Sep 3 14:25:35.352: INFO: stdout: "26108133 drwxrwxrwx 2 root root 4.0K Sep 3 14:24 /data\n" Sep 3 14:25:35.352: INFO: stdout of ls -idlh /data on ss-0: 26108133 drwxrwxrwx 2 root root 4.0K Sep 3 14:24 /data Sep 3 14:25:35.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-1 -- /bin/sh -x -c ls -idlh /data' Sep 3 14:25:35.586: INFO: stderr: "+ ls -idlh /data\n" Sep 3 14:25:35.586: INFO: stdout: "28976547 drwxrwxrwx 2 root root 4.0K Sep 3 14:24 /data\n" Sep 3 14:25:35.586: INFO: stdout of ls -idlh /data on ss-1: 28976547 drwxrwxrwx 2 root root 4.0K Sep 3 14:24 /data Sep 3 14:25:35.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-2 -- /bin/sh -x -c ls -idlh /data' Sep 3 14:25:35.802: INFO: stderr: "+ ls -idlh /data\n" Sep 3 14:25:35.802: INFO: stdout: " 13971 drwxrwxrwx 2 root root 4.0K Sep 3 14:24 /data\n" Sep 3 14:25:35.802: INFO: stdout of ls -idlh /data on ss-2: 13971 drwxrwxrwx 2 root root 4.0K Sep 3 14:24 /data Sep 3 14:25:35.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-0 -- /bin/sh -x -c find /data' Sep 3 14:25:36.020: INFO: stderr: "+ find /data\n" Sep 3 14:25:36.020: INFO: stdout: "/data\n/data/hostname\n/data/1630679091273051478\n/data/statefulset-continue\n" Sep 3 14:25:36.021: INFO: stdout of find /data on ss-0: /data /data/hostname /data/1630679091273051478 /data/statefulset-continue Sep 3 14:25:36.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-1 -- /bin/sh -x -c find /data' Sep 3 14:25:36.253: INFO: stderr: "+ find /data\n" Sep 3 14:25:36.253: INFO: stdout: "/data\n/data/hostname\n/data/1630679091273051478\n/data/statefulset-continue\n" Sep 3 14:25:36.253: INFO: stdout of find /data on ss-1: /data /data/hostname /data/1630679091273051478 /data/statefulset-continue Sep 3 14:25:36.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-2 -- /bin/sh -x -c find /data' Sep 3 14:25:36.502: INFO: stderr: "+ find 
/data\n" Sep 3 14:25:36.502: INFO: stdout: "/data\n/data/hostname\n/data/1630679091273051478\n/data/statefulset-continue\n" Sep 3 14:25:36.502: INFO: stdout of find /data on ss-2: /data /data/hostname /data/1630679091273051478 /data/statefulset-continue Sep 3 14:25:36.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-0 -- /bin/sh -x -c touch /data/1630679135100113899' Sep 3 14:25:36.756: INFO: stderr: "+ touch /data/1630679135100113899\n" Sep 3 14:25:36.756: INFO: stdout: "" Sep 3 14:25:36.756: INFO: stdout of touch /data/1630679135100113899 on ss-0: Sep 3 14:25:36.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-1 -- /bin/sh -x -c touch /data/1630679135100113899' Sep 3 14:25:37.017: INFO: stderr: "+ touch /data/1630679135100113899\n" Sep 3 14:25:37.017: INFO: stdout: "" Sep 3 14:25:37.017: INFO: stdout of touch /data/1630679135100113899 on ss-1: Sep 3 14:25:37.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-2 -- /bin/sh -x -c touch /data/1630679135100113899' Sep 3 14:25:37.273: INFO: stderr: "+ touch /data/1630679135100113899\n" Sep 3 14:25:37.273: INFO: stdout: "" Sep 3 14:25:37.273: INFO: stdout of touch /data/1630679135100113899 on ss-2: STEP: Running if [ "$(cat /data/hostname)" = "$(hostname)" ]; then exit 0; else exit 1; fi in all stateful pods Sep 3 14:25:37.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-0 -- /bin/sh -x -c if [ "$(cat /data/hostname)" = "$(hostname)" ]; then exit 0; else exit 1; fi' Sep 3 14:25:37.501: INFO: stderr: "+ cat /data/hostname\n+ hostname\n+ '[' ss-0 '=' ss-0 ]\n+ exit 0\n" Sep 3 14:25:37.501: INFO: stdout: "" Sep 3 14:25:37.501: INFO: stdout of if [ "$(cat /data/hostname)" = "$(hostname)" ]; then exit 0; else exit 1; fi on ss-0: Sep 3 14:25:37.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-1 -- /bin/sh -x -c if [ "$(cat /data/hostname)" = "$(hostname)" ]; then exit 0; else exit 1; fi' Sep 3 14:25:37.788: INFO: stderr: "+ cat /data/hostname\n+ hostname\n+ '[' ss-1 '=' ss-1 ]\n+ exit 0\n" Sep 3 14:25:37.788: INFO: stdout: "" Sep 3 14:25:37.788: INFO: stdout of if [ "$(cat /data/hostname)" = "$(hostname)" ]; then exit 0; else exit 1; fi on ss-1: Sep 3 14:25:37.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8753 exec ss-2 -- /bin/sh -x -c if [ "$(cat /data/hostname)" = "$(hostname)" ]; then exit 0; else exit 1; fi' Sep 3 14:25:38.004: INFO: stderr: "+ cat /data/hostname\n+ hostname\n+ '[' ss-2 '=' ss-2 ]\n+ exit 0\n" Sep 3 14:25:38.004: INFO: stdout: "" Sep 3 14:25:38.004: INFO: stdout of if [ "$(cat /data/hostname)" = "$(hostname)" ]; then exit 0; else exit 1; fi on ss-2: [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 3 14:25:38.004: INFO: Deleting all statefulset in ns statefulset-8753 Sep 3 14:25:38.008: INFO: Scaling statefulset ss to 0 Sep 3 14:25:58.027: INFO: Waiting for statefulset status.replicas updated to 0 Sep 3 14:25:58.030: INFO: Deleting statefulset ss Sep 3 14:25:58.039: INFO: Deleting pvc: datadir-ss-0 with volume pvc-1382b64f-def7-48a6-a9ed-9b6741c11c06 Sep 3 14:25:58.043: INFO: Deleting pvc: datadir-ss-1 with volume 
pvc-ed67d52f-5b2b-4dcc-8860-2ea68675208c Sep 3 14:25:58.047: INFO: Deleting pvc: datadir-ss-2 with volume pvc-3f31952b-75b6-4cb2-abf3-1278c556a2f4 Sep 3 14:25:58.056: INFO: Still waiting for pvs of statefulset to disappear: pvc-1382b64f-def7-48a6-a9ed-9b6741c11c06: {Phase:Bound Message: Reason:} pvc-3f31952b-75b6-4cb2-abf3-1278c556a2f4: {Phase:Bound Message: Reason:} pvc-ed67d52f-5b2b-4dcc-8860-2ea68675208c: {Phase:Bound Message: Reason:} [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:26:08.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8753" for this suite. • [SLOW TEST:118.076 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should provide basic identity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:124 ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:24:42.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:58 [It] should delete failed finished jobs with limit of one job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:247 STEP: Creating an AllowConcurrent cronjob with custom history limit STEP: Ensuring a finished job exists STEP: Ensuring a finished job exists by listing jobs explicitly STEP: Ensuring this job and its pods does not exist anymore STEP: Ensuring there is 1 finished job by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:26:16.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-899" for this suite. 
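The "AllowConcurrent cronjob with custom history limit" in the spec above is, in outline, a batch/v1beta1 CronJob (the version served on this v1.19 cluster) whose Jobs always fail, with failedJobsHistoryLimit forcing older finished Jobs to be garbage-collected. Everything below apart from the namespace and the history-limit idea is illustrative, including the name, schedule, image, and command.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: failed-history-limit      # illustrative name
  namespace: cronjob-899
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Allow
  failedJobsHistoryLimit: 1       # keep only the most recent failed Job
  successfulJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: c
            image: docker.io/library/busybox:1.29   # assumed image
            command: ["sh", "-c", "exit 1"]         # always fails, producing finished Jobs to prune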
• [SLOW TEST:94.812 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete failed finished jobs with limit of one job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:247 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","total":-1,"completed":4,"skipped":1685,"failed":0} Sep 3 14:26:16.953: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:24:35.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:58 [It] should delete successful finished jobs with limit of one successful job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:236 STEP: Creating an AllowConcurrent cronjob with custom history limit STEP: Ensuring a finished job exists STEP: Ensuring a finished job exists by listing jobs explicitly STEP: Ensuring this job and its pods does not exist anymore STEP: Ensuring there is 1 finished job by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:26:17.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-6677" for this suite. 
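The companion spec that ends here is the mirror image of the previous sketch: the same AllowConcurrent shape, but with a command that succeeds and successfulJobsHistoryLimit pinned to one, so only the newest successful Job survives. A fragment of such a spec (values illustrative):

spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Allow
  successfulJobsHistoryLimit: 1   # prune all but the newest successful Job
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: c
            image: docker.io/library/busybox:1.29   # assumed image
            command: ["sh", "-c", "exit 0"]         # always succeeds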
• [SLOW TEST:102.075 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete successful finished jobs with limit of one successful job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:236 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should delete successful finished jobs with limit of one successful job","total":-1,"completed":4,"skipped":738,"failed":0} Sep 3 14:26:17.503: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:24:13.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:58 [It] should not emit unexpected warnings /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:174 STEP: Creating a cronjob STEP: Ensuring at least two jobs and at least one finished job exists by listing jobs explicitly STEP: Ensuring no unexpected event has happened STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:26:38.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-310" for this suite. • [SLOW TEST:144.113 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not emit unexpected warnings /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:174 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not emit unexpected warnings","total":-1,"completed":4,"skipped":326,"failed":0} Sep 3 14:26:38.051: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:25:05.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4608 [It] should perform rolling updates and roll backs of template modifications with PVCs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:284 STEP: Creating a new StatefulSet with PVCs Sep 3 14:25:05.978: INFO: Default storage class: "local-path" Sep 3 14:25:05.987: INFO: Found 0 stateful pods, waiting for 3 Sep 3 14:25:15.992: INFO: Found 2 stateful pods, waiting for 3 Sep 3 14:25:25.992: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 3 
14:25:25.992: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Sep 3 14:25:25.992: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true Sep 3 14:25:26.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4608 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 3 14:25:26.329: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Sep 3 14:25:26.329: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 3 14:25:26.330: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Sep 3 14:25:36.363: INFO: Updating stateful set ss STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Sep 3 14:25:46.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4608 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 3 14:25:46.639: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Sep 3 14:25:46.639: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 3 14:25:46.639: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 3 14:25:56.662: INFO: Waiting for StatefulSet statefulset-4608/ss to complete update Sep 3 14:25:56.662: INFO: Waiting for Pod statefulset-4608/ss-0 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7 Sep 3 14:25:56.662: INFO: Waiting for Pod statefulset-4608/ss-1 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7 Sep 3 14:26:06.670: INFO: Waiting for StatefulSet statefulset-4608/ss to complete update Sep 3 14:26:06.670: INFO: Waiting for Pod statefulset-4608/ss-0 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7 STEP: Rolling back to a previous revision Sep 3 14:26:16.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4608 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 3 14:26:16.948: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Sep 3 14:26:16.948: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 3 14:26:16.948: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 3 14:26:26.984: INFO: Updating stateful set ss STEP: Rolling back update in reverse ordinal order Sep 3 14:26:37.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4608 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 3 14:26:37.278: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Sep 3 14:26:37.278: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 3 14:26:37.278: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 3 14:26:47.299: INFO: Waiting for StatefulSet statefulset-4608/ss to complete update Sep 3 14:26:47.299: INFO: Waiting for Pod statefulset-4608/ss-0 to have revision 
ss-6d5f4b76b7 update revision ss-59b79b8798 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 3 14:26:57.307: INFO: Deleting all statefulset in ns statefulset-4608 Sep 3 14:26:57.310: INFO: Scaling statefulset ss to 0 Sep 3 14:27:17.328: INFO: Waiting for statefulset status.replicas updated to 0 Sep 3 14:27:17.332: INFO: Deleting statefulset ss Sep 3 14:27:17.341: INFO: Deleting pvc: datadir-ss-0 with volume pvc-1a94aadc-ef42-4b00-a5d9-6f152b8c656e Sep 3 14:27:17.345: INFO: Deleting pvc: datadir-ss-1 with volume pvc-8cfd98bf-f3d1-443b-880b-10cff02bd688 Sep 3 14:27:17.350: INFO: Deleting pvc: datadir-ss-2 with volume pvc-127b1fb7-18c9-4f89-adaf-46ceb1b125ac Sep 3 14:27:17.358: INFO: Still waiting for pvs of statefulset to disappear: pvc-127b1fb7-18c9-4f89-adaf-46ceb1b125ac: {Phase:Bound Message: Reason:} pvc-1a94aadc-ef42-4b00-a5d9-6f152b8c656e: {Phase:Bound Message: Reason:} pvc-8cfd98bf-f3d1-443b-880b-10cff02bd688: {Phase:Bound Message: Reason:} [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:27:27.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4608" for this suite. • [SLOW TEST:141.434 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications with PVCs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:284 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs","total":-1,"completed":2,"skipped":575,"failed":0} Sep 3 14:27:27.376: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:23:39.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob Sep 3 14:23:39.259: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Sep 3 14:23:39.261: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:58 [It] should not schedule jobs when suspended [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:86 STEP: Creating a suspended cronjob STEP: Ensuring no jobs are scheduled STEP: Ensuring no job exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:28:39.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-8864" for this 
suite. • [SLOW TEST:300.067 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule jobs when suspended [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:86 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow]","total":-1,"completed":1,"skipped":119,"failed":0} Sep 3 14:28:39.300: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:23:57.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:58 [It] should not schedule new jobs when ForbidConcurrent [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:110 STEP: Creating a ForbidConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring no more jobs are scheduled STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:29:03.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-1380" for this suite. • [SLOW TEST:306.094 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule new jobs when ForbidConcurrent [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:110 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow]","total":-1,"completed":2,"skipped":201,"failed":0} Sep 3 14:29:03.625: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity","total":-1,"completed":4,"skipped":187,"failed":0} Sep 3 14:26:08.075: INFO: Running AfterSuite actions on all nodes Sep 3 14:29:03.697: INFO: Running AfterSuite actions on node 1 Sep 3 14:29:03.697: INFO: Skipping dumping logs from cluster Ran 32 of 5484 Specs in 324.872 seconds SUCCESS! -- 32 Passed | 0 Failed | 0 Pending | 5452 Skipped Ginkgo ran 1 suite in 5m26.602336135s Test Suite Passed
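For completeness, the two [Slow] CronJob specs that close out this run exercise spec.suspend and spec.concurrencyPolicy: Forbid. A hedged sketch of the Forbid case follows; the name, schedule, image, and command are illustrative.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: forbid                   # illustrative name
  namespace: cronjob-1380
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Forbid      # never start a new Job while one is still running
  # the suspended variant from cronjob-8864 instead sets:
  # suspend: true                # no Jobs are scheduled at all while true
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: c
            image: docker.io/library/busybox:1.29   # assumed image
            command: ["sleep", "300"]               # long-running, so the next tick is skipped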