I0325 17:19:50.675907 7 e2e.go:129] Starting e2e run "e29e7a7d-608d-4830-9dd0-ee033212f644" on Ginkgo node 1 {"msg":"Test Suite starting","total":115,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1616692789 - Will randomize all specs Will run 115 of 5737 specs Mar 25 17:19:50.736: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:19:50.738: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Mar 25 17:19:50.757: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Mar 25 17:19:50.790: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Mar 25 17:19:50.790: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Mar 25 17:19:50.790: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Mar 25 17:19:50.801: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Mar 25 17:19:50.801: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Mar 25 17:19:50.801: INFO: e2e test version: v1.21.0-beta.1 Mar 25 17:19:50.802: INFO: kube-apiserver version: v1.21.0-alpha.0 Mar 25 17:19:50.802: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:19:50.807: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:19:50.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes Mar 25 17:19:50.920: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, late binding, no topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-6510 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Mar 25 17:19:51.015: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6510-8077/csi-attacher Mar 25 17:19:51.018: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6510 Mar 25 17:19:51.018: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6510 Mar 25 17:19:51.027: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6510 Mar 25 17:19:51.046: INFO: creating *v1.Role: csi-mock-volumes-6510-8077/external-attacher-cfg-csi-mock-volumes-6510 Mar 25 17:19:51.089: INFO: creating *v1.RoleBinding: csi-mock-volumes-6510-8077/csi-attacher-role-cfg Mar 25 17:19:51.147: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6510-8077/csi-provisioner Mar 25 17:19:51.159: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6510 Mar 25 17:19:51.159: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6510 Mar 25 17:19:51.178: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6510 Mar 25 17:19:51.195: INFO: creating *v1.Role: csi-mock-volumes-6510-8077/external-provisioner-cfg-csi-mock-volumes-6510 Mar 25 17:19:51.232: INFO: creating *v1.RoleBinding: csi-mock-volumes-6510-8077/csi-provisioner-role-cfg Mar 25 17:19:51.291: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6510-8077/csi-resizer Mar 25 17:19:51.309: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6510 Mar 25 17:19:51.309: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6510 Mar 25 17:19:51.316: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6510 Mar 25 17:19:51.321: INFO: creating *v1.Role: csi-mock-volumes-6510-8077/external-resizer-cfg-csi-mock-volumes-6510 Mar 25 17:19:51.327: INFO: creating *v1.RoleBinding: csi-mock-volumes-6510-8077/csi-resizer-role-cfg Mar 25 17:19:51.357: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6510-8077/csi-snapshotter Mar 25 17:19:51.410: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6510 Mar 25 17:19:51.410: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6510 Mar 25 17:19:51.413: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6510 Mar 25 17:19:51.442: INFO: creating *v1.Role: csi-mock-volumes-6510-8077/external-snapshotter-leaderelection-csi-mock-volumes-6510 Mar 25 17:19:51.472: INFO: creating *v1.RoleBinding: csi-mock-volumes-6510-8077/external-snapshotter-leaderelection Mar 25 17:19:51.482: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6510-8077/csi-mock Mar 25 17:19:51.488: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6510 Mar 25 17:19:51.494: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6510 Mar 25 17:19:51.500: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6510 Mar 25 17:19:51.548: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6510 Mar 25 17:19:51.574: INFO: creating *v1.ClusterRoleBinding: 
csi-controller-resizer-role-csi-mock-volumes-6510 Mar 25 17:19:51.584: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6510 Mar 25 17:19:51.590: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6510 Mar 25 17:19:51.596: INFO: creating *v1.StatefulSet: csi-mock-volumes-6510-8077/csi-mockplugin Mar 25 17:19:51.602: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6510 Mar 25 17:19:51.627: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6510" Mar 25 17:19:51.697: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6510 to register on node latest-worker I0325 17:20:00.342804 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0325 17:20:00.345126 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-6510","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0325 17:20:00.390748 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0325 17:20:00.439553 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0325 17:20:00.440795 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-6510","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0325 17:20:00.976374 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-6510"},"Error":"","FullError":null} STEP: Creating pod Mar 25 17:20:01.245: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0325 17:20:01.359060 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-743f0529-5454-4b61-b09f-0576d3f7ab9a","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I0325 17:20:01.368228 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-743f0529-5454-4b61-b09f-0576d3f7ab9a","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-743f0529-5454-4b61-b09f-0576d3f7ab9a"}}},"Error":"","FullError":null} I0325 17:20:02.523038 7 csi.go:380] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Mar 25 17:20:02.526: INFO: >>> kubeConfig: /root/.kube/config I0325 17:20:02.666567 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-743f0529-5454-4b61-b09f-0576d3f7ab9a/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-743f0529-5454-4b61-b09f-0576d3f7ab9a","storage.kubernetes.io/csiProvisionerIdentity":"1616692800486-8081-csi-mock-csi-mock-volumes-6510"}},"Response":{},"Error":"","FullError":null} I0325 17:20:02.673644 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Mar 25 17:20:02.676: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:20:02.783: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:20:02.904: INFO: >>> kubeConfig: /root/.kube/config I0325 17:20:03.000567 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-743f0529-5454-4b61-b09f-0576d3f7ab9a/globalmount","target_path":"/var/lib/kubelet/pods/1b7c6d6c-4f48-4e71-8667-af68c4bef24d/volumes/kubernetes.io~csi/pvc-743f0529-5454-4b61-b09f-0576d3f7ab9a/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-743f0529-5454-4b61-b09f-0576d3f7ab9a","storage.kubernetes.io/csiProvisionerIdentity":"1616692800486-8081-csi-mock-csi-mock-volumes-6510"}},"Response":{},"Error":"","FullError":null} Mar 25 17:20:07.305: INFO: Deleting pod "pvc-volume-tester-rk542" in namespace "csi-mock-volumes-6510" Mar 25 17:20:07.310: INFO: Wait up to 5m0s for pod "pvc-volume-tester-rk542" to be fully deleted Mar 25 17:20:09.950: INFO: >>> kubeConfig: /root/.kube/config I0325 17:20:10.064670 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/1b7c6d6c-4f48-4e71-8667-af68c4bef24d/volumes/kubernetes.io~csi/pvc-743f0529-5454-4b61-b09f-0576d3f7ab9a/mount"},"Response":{},"Error":"","FullError":null} I0325 17:20:10.149494 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0325 17:20:10.152580 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-743f0529-5454-4b61-b09f-0576d3f7ab9a/globalmount"},"Response":{},"Error":"","FullError":null} I0325 17:20:17.377304 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Mar 25 17:20:18.327: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-rmt97", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6510", SelfLink:"", UID:"743f0529-5454-4b61-b09f-0576d3f7ab9a", 
ResourceVersion:"1268315", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752289601, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0031b4420), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0031b4438)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003175720), VolumeMode:(*v1.PersistentVolumeMode)(0xc003175730), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 17:20:18.327: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-rmt97", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6510", SelfLink:"", UID:"743f0529-5454-4b61-b09f-0576d3f7ab9a", ResourceVersion:"1268318", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752289601, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"latest-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003212b88), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003212ba0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003212bb8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003212bd0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0036b2ec0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0036b2ed0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 17:20:18.328: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-rmt97", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6510", SelfLink:"", UID:"743f0529-5454-4b61-b09f-0576d3f7ab9a", ResourceVersion:"1268319", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, 
ext:63752289601, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-6510", "volume.kubernetes.io/selected-node":"latest-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003420ff0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003421008)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003421020), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003421038)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003421050), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003421068)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003462910), VolumeMode:(*v1.PersistentVolumeMode)(0xc003462920), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 17:20:18.328: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-rmt97", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6510", SelfLink:"", UID:"743f0529-5454-4b61-b09f-0576d3f7ab9a", ResourceVersion:"1268326", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752289601, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-6510", "volume.kubernetes.io/selected-node":"latest-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003421098), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034210b0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034210c8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034210e0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034210f8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003421110)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-743f0529-5454-4b61-b09f-0576d3f7ab9a", StorageClassName:(*string)(0xc003462950), VolumeMode:(*v1.PersistentVolumeMode)(0xc003462960), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 17:20:18.328: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-rmt97", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6510", SelfLink:"", UID:"743f0529-5454-4b61-b09f-0576d3f7ab9a", ResourceVersion:"1268328", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752289601, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-6510", "volume.kubernetes.io/selected-node":"latest-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003213ba8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003213bd8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003213c08), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003213c20)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003213c38), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003213c50)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-743f0529-5454-4b61-b09f-0576d3f7ab9a", StorageClassName:(*string)(0xc0036b3050), VolumeMode:(*v1.PersistentVolumeMode)(0xc0036b3060), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 17:20:18.328: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-rmt97", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6510", SelfLink:"", UID:"743f0529-5454-4b61-b09f-0576d3f7ab9a", ResourceVersion:"1268375", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752289601, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc003421140), DeletionGracePeriodSeconds:(*int64)(0xc003941448), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", 
"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-6510", "volume.kubernetes.io/selected-node":"latest-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003421158), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003421170)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003421188), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034211a0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034211b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034211d0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-743f0529-5454-4b61-b09f-0576d3f7ab9a", StorageClassName:(*string)(0xc003462990), VolumeMode:(*v1.PersistentVolumeMode)(0xc0034629a0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 17:20:18.328: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-rmt97", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6510", SelfLink:"", UID:"743f0529-5454-4b61-b09f-0576d3f7ab9a", ResourceVersion:"1268376", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752289601, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc003213cc8), DeletionGracePeriodSeconds:(*int64)(0xc0035b1ff8), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-6510", "volume.kubernetes.io/selected-node":"latest-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003213ce0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003213db8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003213dd0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003213de8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003213e00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003213e18)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-743f0529-5454-4b61-b09f-0576d3f7ab9a", StorageClassName:(*string)(0xc0036b30f0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0036b3100), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-rk542 Mar 25 17:20:18.328: INFO: Deleting pod "pvc-volume-tester-rk542" in namespace "csi-mock-volumes-6510" STEP: Deleting claim pvc-rmt97 STEP: Deleting storageclass csi-mock-volumes-6510-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6510 STEP: Waiting for namespaces [csi-mock-volumes-6510] to vanish STEP: uninstalling csi mock driver Mar 25 17:20:24.370: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6510-8077/csi-attacher Mar 25 17:20:24.399: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6510 Mar 25 17:20:24.432: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6510 Mar 25 17:20:24.439: INFO: deleting *v1.Role: csi-mock-volumes-6510-8077/external-attacher-cfg-csi-mock-volumes-6510 Mar 25 17:20:24.446: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6510-8077/csi-attacher-role-cfg Mar 25 17:20:24.451: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6510-8077/csi-provisioner Mar 25 17:20:24.469: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6510 Mar 25 17:20:24.479: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6510 Mar 25 17:20:24.487: INFO: deleting *v1.Role: csi-mock-volumes-6510-8077/external-provisioner-cfg-csi-mock-volumes-6510 Mar 25 17:20:24.493: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6510-8077/csi-provisioner-role-cfg Mar 25 17:20:24.581: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6510-8077/csi-resizer Mar 25 17:20:24.590: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6510 Mar 25 17:20:24.595: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6510 Mar 25 17:20:24.606: INFO: deleting *v1.Role: csi-mock-volumes-6510-8077/external-resizer-cfg-csi-mock-volumes-6510 Mar 25 17:20:24.613: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6510-8077/csi-resizer-role-cfg Mar 25 17:20:24.619: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6510-8077/csi-snapshotter Mar 25 17:20:24.630: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6510 Mar 25 17:20:24.637: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6510 Mar 25 17:20:24.666: INFO: deleting *v1.Role: csi-mock-volumes-6510-8077/external-snapshotter-leaderelection-csi-mock-volumes-6510 Mar 25 17:20:24.673: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6510-8077/external-snapshotter-leaderelection Mar 25 17:20:24.699: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6510-8077/csi-mock Mar 25 17:20:24.705: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6510 Mar 25 17:20:24.709: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6510 Mar 25 17:20:24.720: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6510 Mar 25 17:20:24.727: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6510 Mar 25 17:20:24.733: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6510 Mar 25 17:20:24.739: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6510 Mar 25 17:20:24.760: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6510 Mar 25 17:20:24.776: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6510-8077/csi-mockplugin Mar 25 17:20:24.787: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6510 STEP: deleting the driver namespace: csi-mock-volumes-6510-8077 STEP: Waiting for namespaces [csi-mock-volumes-6510-8077] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:21:08.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:78.029 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 exhausted, late binding, no topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":115,"completed":1,"skipped":35,"failed":0} SSSSSS ------------------------------ [sig-storage] PersistentVolumes GCEPD should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:155 [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:21:08.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Mar 25 17:21:08.923: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:21:08.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3458" for this suite. 
[AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:110 Mar 25 17:21:08.933: INFO: AfterEach: Cleaning up test resources Mar 25 17:21:08.933: INFO: pvc is nil Mar 25 17:21:08.933: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.096 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:155 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:21:08.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker" using path "/tmp/local-volume-test-042f82c1-cca4-4b7e-bc82-9e8147ffe5dd" Mar 25 17:21:13.111: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-042f82c1-cca4-4b7e-bc82-9e8147ffe5dd && dd if=/dev/zero of=/tmp/local-volume-test-042f82c1-cca4-4b7e-bc82-9e8147ffe5dd/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-042f82c1-cca4-4b7e-bc82-9e8147ffe5dd/file] Namespace:persistent-local-volumes-test-9186 PodName:hostexec-latest-worker-fm4tz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:21:13.111: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:21:13.348: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-042f82c1-cca4-4b7e-bc82-9e8147ffe5dd/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9186 PodName:hostexec-latest-worker-fm4tz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:21:13.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 17:21:13.449: INFO: Creating a PV followed by a PVC Mar 25 17:21:13.462: INFO: Waiting for PV local-pvn4ggp to bind to PVC pvc-jdzcg Mar 25 17:21:13.462: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-jdzcg] to have phase 
Bound Mar 25 17:21:13.482: INFO: PersistentVolumeClaim pvc-jdzcg found but phase is Pending instead of Bound. Mar 25 17:21:15.487: INFO: PersistentVolumeClaim pvc-jdzcg found but phase is Pending instead of Bound. Mar 25 17:21:17.573: INFO: PersistentVolumeClaim pvc-jdzcg found but phase is Pending instead of Bound. Mar 25 17:21:19.626: INFO: PersistentVolumeClaim pvc-jdzcg found and phase=Bound (6.164650118s) Mar 25 17:21:19.626: INFO: Waiting up to 3m0s for PersistentVolume local-pvn4ggp to have phase Bound Mar 25 17:21:19.631: INFO: PersistentVolume local-pvn4ggp found and phase=Bound (4.348202ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 17:21:44.675: INFO: pod "pod-9c7644ce-2219-4145-9f0a-fbf705f3e8af" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 17:21:44.675: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9186 PodName:pod-9c7644ce-2219-4145-9f0a-fbf705f3e8af ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:21:44.676: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:21:45.453: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000058 seconds, 303.1KB/s", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Mar 25 17:21:45.453: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-9186 PodName:pod-9c7644ce-2219-4145-9f0a-fbf705f3e8af ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:21:45.453: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:21:45.745: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-9c7644ce-2219-4145-9f0a-fbf705f3e8af in namespace persistent-local-volumes-test-9186 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:21:45.820: INFO: Deleting PersistentVolumeClaim "pvc-jdzcg" Mar 25 17:21:46.123: INFO: Deleting PersistentVolume "local-pvn4ggp" Mar 25 17:21:46.369: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-042f82c1-cca4-4b7e-bc82-9e8147ffe5dd/file | awk 
'{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9186 PodName:hostexec-latest-worker-fm4tz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:21:46.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker" at path /tmp/local-volume-test-042f82c1-cca4-4b7e-bc82-9e8147ffe5dd/file Mar 25 17:21:46.717: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-9186 PodName:hostexec-latest-worker-fm4tz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:21:46.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-042f82c1-cca4-4b7e-bc82-9e8147ffe5dd Mar 25 17:21:46.854: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-042f82c1-cca4-4b7e-bc82-9e8147ffe5dd] Namespace:persistent-local-volumes-test-9186 PodName:hostexec-latest-worker-fm4tz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:21:46.854: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:21:47.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9186" for this suite. • [SLOW TEST:38.714 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":115,"completed":2,"skipped":42,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:21:47.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] 
PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 17:21:55.410: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-08e65dc6-91ff-437b-aad5-d881fc0901c4-backend && mount --bind /tmp/local-volume-test-08e65dc6-91ff-437b-aad5-d881fc0901c4-backend /tmp/local-volume-test-08e65dc6-91ff-437b-aad5-d881fc0901c4-backend && ln -s /tmp/local-volume-test-08e65dc6-91ff-437b-aad5-d881fc0901c4-backend /tmp/local-volume-test-08e65dc6-91ff-437b-aad5-d881fc0901c4] Namespace:persistent-local-volumes-test-5936 PodName:hostexec-latest-worker-ckpcm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:21:55.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 17:21:55.621: INFO: Creating a PV followed by a PVC Mar 25 17:21:55.941: INFO: Waiting for PV local-pvgdck6 to bind to PVC pvc-rjs7p Mar 25 17:21:55.941: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-rjs7p] to have phase Bound Mar 25 17:21:56.305: INFO: PersistentVolumeClaim pvc-rjs7p found but phase is Pending instead of Bound. Mar 25 17:21:58.334: INFO: PersistentVolumeClaim pvc-rjs7p found and phase=Bound (2.392200586s) Mar 25 17:21:58.334: INFO: Waiting up to 3m0s for PersistentVolume local-pvgdck6 to have phase Bound Mar 25 17:21:58.495: INFO: PersistentVolume local-pvgdck6 found and phase=Bound (161.762974ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Mar 25 17:22:08.770: INFO: pod "pod-dc4294cd-c213-499b-b1ea-b61965ab4ab5" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 17:22:08.770: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5936 PodName:pod-dc4294cd-c213-499b-b1ea-b61965ab4ab5 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:22:08.770: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:22:09.350: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 17:22:09.350: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5936 PodName:pod-dc4294cd-c213-499b-b1ea-b61965ab4ab5 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:22:09.350: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:22:09.476: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Mar 25 17:22:13.559: INFO: pod "pod-447ebd53-ab7b-43e1-b52b-cd514293ddaf" created on Node "latest-worker" Mar 25 17:22:13.559: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5936 PodName:pod-447ebd53-ab7b-43e1-b52b-cd514293ddaf ContainerName:write-pod Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:22:13.559: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:22:13.659: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Mar 25 17:22:13.659: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-08e65dc6-91ff-437b-aad5-d881fc0901c4 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5936 PodName:pod-447ebd53-ab7b-43e1-b52b-cd514293ddaf ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:22:13.659: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:22:13.750: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-08e65dc6-91ff-437b-aad5-d881fc0901c4 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Mar 25 17:22:13.750: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5936 PodName:pod-dc4294cd-c213-499b-b1ea-b61965ab4ab5 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:22:13.750: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:22:13.869: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-08e65dc6-91ff-437b-aad5-d881fc0901c4", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-dc4294cd-c213-499b-b1ea-b61965ab4ab5 in namespace persistent-local-volumes-test-5936 STEP: Deleting pod2 STEP: Deleting pod pod-447ebd53-ab7b-43e1-b52b-cd514293ddaf in namespace persistent-local-volumes-test-5936 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:22:13.944: INFO: Deleting PersistentVolumeClaim "pvc-rjs7p" Mar 25 17:22:13.960: INFO: Deleting PersistentVolume "local-pvgdck6" STEP: Removing the test directory Mar 25 17:22:13.971: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-08e65dc6-91ff-437b-aad5-d881fc0901c4 && umount /tmp/local-volume-test-08e65dc6-91ff-437b-aad5-d881fc0901c4-backend && rm -r /tmp/local-volume-test-08e65dc6-91ff-437b-aad5-d881fc0901c4-backend] Namespace:persistent-local-volumes-test-5936 PodName:hostexec-latest-worker-ckpcm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:22:13.971: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:22:14.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5936" for this suite. 
• [SLOW TEST:26.910 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":115,"completed":3,"skipped":209,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:22:14.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Mar 25 17:22:45.125: INFO: Deleting pod "pv-3071"/"pod-ephm-test-projected-klwf" Mar 25 17:22:45.125: INFO: Deleting pod "pod-ephm-test-projected-klwf" in namespace "pv-3071" Mar 25 17:22:45.132: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-klwf" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:22:59.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3071" for this suite. 
• [SLOW TEST:44.628 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":115,"completed":4,"skipped":225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:22:59.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : projected /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Mar 25 17:23:29.409: INFO: Deleting pod "pv-2035"/"pod-ephm-test-projected-kxqn" Mar 25 17:23:29.409: INFO: Deleting pod "pod-ephm-test-projected-kxqn" in namespace "pv-2035" Mar 25 17:23:29.415: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-kxqn" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:23:35.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2035" for this suite. 
• [SLOW TEST:36.259 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : projected /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":115,"completed":5,"skipped":254,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:23:35.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75 STEP: Creating configMap with name configmap-test-volume-235286b7-e250-4db4-880c-547b8595fb9c STEP: Creating a pod to test consume configMaps Mar 25 17:23:35.541: INFO: Waiting up to 5m0s for pod "pod-configmaps-28ba8c2e-454a-400c-9bb4-a02c542647a1" in namespace "configmap-5389" to be "Succeeded or Failed" Mar 25 17:23:35.546: INFO: Pod "pod-configmaps-28ba8c2e-454a-400c-9bb4-a02c542647a1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.331242ms Mar 25 17:23:37.602: INFO: Pod "pod-configmaps-28ba8c2e-454a-400c-9bb4-a02c542647a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061659842s Mar 25 17:23:39.607: INFO: Pod "pod-configmaps-28ba8c2e-454a-400c-9bb4-a02c542647a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066244173s Mar 25 17:23:41.613: INFO: Pod "pod-configmaps-28ba8c2e-454a-400c-9bb4-a02c542647a1": Phase="Running", Reason="", readiness=true. Elapsed: 6.07197552s Mar 25 17:23:43.617: INFO: Pod "pod-configmaps-28ba8c2e-454a-400c-9bb4-a02c542647a1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.076446511s STEP: Saw pod success Mar 25 17:23:43.617: INFO: Pod "pod-configmaps-28ba8c2e-454a-400c-9bb4-a02c542647a1" satisfied condition "Succeeded or Failed" Mar 25 17:23:43.621: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-28ba8c2e-454a-400c-9bb4-a02c542647a1 container agnhost-container: STEP: delete the pod Mar 25 17:23:43.644: INFO: Waiting for pod pod-configmaps-28ba8c2e-454a-400c-9bb4-a02c542647a1 to disappear Mar 25 17:23:43.648: INFO: Pod pod-configmaps-28ba8c2e-454a-400c-9bb4-a02c542647a1 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:23:43.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5389" for this suite. • [SLOW TEST:8.207 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":115,"completed":6,"skipped":285,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:23:43.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker" using path "/tmp/local-volume-test-6ddb50e4-efa4-4c85-97d4-47900a91d2ec" Mar 25 17:23:47.859: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6ddb50e4-efa4-4c85-97d4-47900a91d2ec && dd if=/dev/zero of=/tmp/local-volume-test-6ddb50e4-efa4-4c85-97d4-47900a91d2ec/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-6ddb50e4-efa4-4c85-97d4-47900a91d2ec/file] Namespace:persistent-local-volumes-test-2902 PodName:hostexec-latest-worker-wtc5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:23:47.859: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:23:48.041: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6ddb50e4-efa4-4c85-97d4-47900a91d2ec/file | awk '{ 
print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2902 PodName:hostexec-latest-worker-wtc5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:23:48.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 17:23:48.154: INFO: Creating a PV followed by a PVC Mar 25 17:23:48.166: INFO: Waiting for PV local-pvckft6 to bind to PVC pvc-bc7pl Mar 25 17:23:48.166: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-bc7pl] to have phase Bound Mar 25 17:23:48.172: INFO: PersistentVolumeClaim pvc-bc7pl found but phase is Pending instead of Bound. Mar 25 17:23:50.176: INFO: PersistentVolumeClaim pvc-bc7pl found but phase is Pending instead of Bound. Mar 25 17:23:52.180: INFO: PersistentVolumeClaim pvc-bc7pl found but phase is Pending instead of Bound. Mar 25 17:23:54.185: INFO: PersistentVolumeClaim pvc-bc7pl found but phase is Pending instead of Bound. Mar 25 17:23:56.190: INFO: PersistentVolumeClaim pvc-bc7pl found but phase is Pending instead of Bound. Mar 25 17:23:58.196: INFO: PersistentVolumeClaim pvc-bc7pl found but phase is Pending instead of Bound. Mar 25 17:24:00.201: INFO: PersistentVolumeClaim pvc-bc7pl found but phase is Pending instead of Bound. Mar 25 17:24:02.206: INFO: PersistentVolumeClaim pvc-bc7pl found but phase is Pending instead of Bound. Mar 25 17:24:04.211: INFO: PersistentVolumeClaim pvc-bc7pl found and phase=Bound (16.044605802s) Mar 25 17:24:04.211: INFO: Waiting up to 3m0s for PersistentVolume local-pvckft6 to have phase Bound Mar 25 17:24:04.214: INFO: PersistentVolume local-pvckft6 found and phase=Bound (3.023266ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 17:24:08.241: INFO: pod "pod-be5aa7ca-f24c-4b3b-8012-e2dc584d3f1d" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 17:24:08.241: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2902 PodName:pod-be5aa7ca-f24c-4b3b-8012-e2dc584d3f1d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:24:08.241: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:24:08.343: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000087 seconds, 202.0KB/s", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Mar 25 17:24:08.343: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-2902 PodName:pod-be5aa7ca-f24c-4b3b-8012-e2dc584d3f1d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:24:08.343: INFO: >>> 
kubeConfig: /root/.kube/config Mar 25 17:24:08.449: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Writing in pod1 Mar 25 17:24:08.449: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2902 PodName:pod-be5aa7ca-f24c-4b3b-8012-e2dc584d3f1d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:24:08.449: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:24:08.583: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n11 bytes (11B) copied, 0.000090 seconds, 119.4KB/s", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-be5aa7ca-f24c-4b3b-8012-e2dc584d3f1d in namespace persistent-local-volumes-test-2902 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:24:08.736: INFO: Deleting PersistentVolumeClaim "pvc-bc7pl" Mar 25 17:24:08.741: INFO: Deleting PersistentVolume "local-pvckft6" Mar 25 17:24:08.833: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6ddb50e4-efa4-4c85-97d4-47900a91d2ec/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2902 PodName:hostexec-latest-worker-wtc5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:24:08.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker" at path /tmp/local-volume-test-6ddb50e4-efa4-4c85-97d4-47900a91d2ec/file Mar 25 17:24:08.958: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-2902 PodName:hostexec-latest-worker-wtc5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:24:08.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-6ddb50e4-efa4-4c85-97d4-47900a91d2ec Mar 25 17:24:09.124: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6ddb50e4-efa4-4c85-97d4-47900a91d2ec] Namespace:persistent-local-volumes-test-2902 PodName:hostexec-latest-worker-wtc5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:24:09.124: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:24:09.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2902" for this suite. • [SLOW TEST:25.631 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":115,"completed":7,"skipped":349,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:24:09.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] volume on tmpfs should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 25 17:24:09.447: INFO: Waiting up to 5m0s for pod "pod-2fb71bb0-a531-4680-b223-169ed9ee2d56" in namespace "emptydir-7021" to be "Succeeded or Failed" Mar 25 17:24:09.494: INFO: Pod "pod-2fb71bb0-a531-4680-b223-169ed9ee2d56": Phase="Pending", Reason="", readiness=false. Elapsed: 47.527701ms Mar 25 17:24:11.586: INFO: Pod "pod-2fb71bb0-a531-4680-b223-169ed9ee2d56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139505931s Mar 25 17:24:13.592: INFO: Pod "pod-2fb71bb0-a531-4680-b223-169ed9ee2d56": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.144895239s STEP: Saw pod success Mar 25 17:24:13.592: INFO: Pod "pod-2fb71bb0-a531-4680-b223-169ed9ee2d56" satisfied condition "Succeeded or Failed" Mar 25 17:24:13.595: INFO: Trying to get logs from node latest-worker2 pod pod-2fb71bb0-a531-4680-b223-169ed9ee2d56 container test-container: STEP: delete the pod Mar 25 17:24:13.628: INFO: Waiting for pod pod-2fb71bb0-a531-4680-b223-169ed9ee2d56 to disappear Mar 25 17:24:13.639: INFO: Pod pod-2fb71bb0-a531-4680-b223-169ed9ee2d56 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:24:13.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7021" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":115,"completed":8,"skipped":397,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:24:13.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91 STEP: Creating a pod to test downward API volume plugin Mar 25 17:24:13.768: INFO: Waiting up to 5m0s for pod "metadata-volume-5640cb6b-4617-4368-a4ca-dd62eb8a3ad1" in namespace "projected-2623" to be "Succeeded or Failed" Mar 25 17:24:13.796: INFO: Pod "metadata-volume-5640cb6b-4617-4368-a4ca-dd62eb8a3ad1": Phase="Pending", Reason="", readiness=false. Elapsed: 27.412664ms Mar 25 17:24:15.832: INFO: Pod "metadata-volume-5640cb6b-4617-4368-a4ca-dd62eb8a3ad1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063699206s Mar 25 17:24:17.837: INFO: Pod "metadata-volume-5640cb6b-4617-4368-a4ca-dd62eb8a3ad1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068821483s Mar 25 17:24:19.841: INFO: Pod "metadata-volume-5640cb6b-4617-4368-a4ca-dd62eb8a3ad1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.07313905s STEP: Saw pod success Mar 25 17:24:19.841: INFO: Pod "metadata-volume-5640cb6b-4617-4368-a4ca-dd62eb8a3ad1" satisfied condition "Succeeded or Failed" Mar 25 17:24:19.844: INFO: Trying to get logs from node latest-worker2 pod metadata-volume-5640cb6b-4617-4368-a4ca-dd62eb8a3ad1 container client-container: STEP: delete the pod Mar 25 17:24:19.878: INFO: Waiting for pod metadata-volume-5640cb6b-4617-4368-a4ca-dd62eb8a3ad1 to disappear Mar 25 17:24:19.888: INFO: Pod metadata-volume-5640cb6b-4617-4368-a4ca-dd62eb8a3ad1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:24:19.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2623" for this suite. • [SLOW TEST:6.248 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":115,"completed":9,"skipped":469,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:24:19.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker" using path "/tmp/local-volume-test-67d1820e-e165-47d6-8313-cb0c5d7ac93a" Mar 25 17:24:22.087: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-67d1820e-e165-47d6-8313-cb0c5d7ac93a && dd if=/dev/zero of=/tmp/local-volume-test-67d1820e-e165-47d6-8313-cb0c5d7ac93a/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-67d1820e-e165-47d6-8313-cb0c5d7ac93a/file] Namespace:persistent-local-volumes-test-1304 PodName:hostexec-latest-worker-vrgvk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:24:22.087: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:24:22.264: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-67d1820e-e165-47d6-8313-cb0c5d7ac93a/file | awk '{ print $1 }') 2>&1 > 
/dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1304 PodName:hostexec-latest-worker-vrgvk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:24:22.264: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:24:22.367: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-67d1820e-e165-47d6-8313-cb0c5d7ac93a && chmod o+rwx /tmp/local-volume-test-67d1820e-e165-47d6-8313-cb0c5d7ac93a] Namespace:persistent-local-volumes-test-1304 PodName:hostexec-latest-worker-vrgvk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:24:22.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 17:24:22.780: INFO: Creating a PV followed by a PVC Mar 25 17:24:22.812: INFO: Waiting for PV local-pv7bpmm to bind to PVC pvc-d85wr Mar 25 17:24:22.812: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-d85wr] to have phase Bound Mar 25 17:24:22.838: INFO: PersistentVolumeClaim pvc-d85wr found but phase is Pending instead of Bound. Mar 25 17:24:24.855: INFO: PersistentVolumeClaim pvc-d85wr found but phase is Pending instead of Bound. Mar 25 17:24:26.858: INFO: PersistentVolumeClaim pvc-d85wr found but phase is Pending instead of Bound. Mar 25 17:24:28.862: INFO: PersistentVolumeClaim pvc-d85wr found but phase is Pending instead of Bound. Mar 25 17:24:30.905: INFO: PersistentVolumeClaim pvc-d85wr found but phase is Pending instead of Bound. Mar 25 17:24:32.933: INFO: PersistentVolumeClaim pvc-d85wr found and phase=Bound (10.120994773s) Mar 25 17:24:32.933: INFO: Waiting up to 3m0s for PersistentVolume local-pv7bpmm to have phase Bound Mar 25 17:24:32.936: INFO: PersistentVolume local-pv7bpmm found and phase=Bound (2.992452ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 17:24:36.962: INFO: pod "pod-2d2f6ae2-ef0c-4552-86bb-38728c2e2e73" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 17:24:36.962: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1304 PodName:pod-2d2f6ae2-ef0c-4552-86bb-38728c2e2e73 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:24:36.962: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:24:37.085: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Mar 25 17:24:37.085: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1304 PodName:pod-2d2f6ae2-ef0c-4552-86bb-38728c2e2e73 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:24:37.085: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:24:37.173: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-2d2f6ae2-ef0c-4552-86bb-38728c2e2e73 in namespace persistent-local-volumes-test-1304 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:24:37.179: INFO: Deleting PersistentVolumeClaim "pvc-d85wr" Mar 25 17:24:37.191: INFO: Deleting PersistentVolume "local-pv7bpmm" Mar 25 17:24:37.222: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-67d1820e-e165-47d6-8313-cb0c5d7ac93a] Namespace:persistent-local-volumes-test-1304 PodName:hostexec-latest-worker-vrgvk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:24:37.223: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:24:37.380: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-67d1820e-e165-47d6-8313-cb0c5d7ac93a/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1304 PodName:hostexec-latest-worker-vrgvk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:24:37.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker" at path /tmp/local-volume-test-67d1820e-e165-47d6-8313-cb0c5d7ac93a/file Mar 25 17:24:37.492: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-1304 PodName:hostexec-latest-worker-vrgvk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:24:37.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-67d1820e-e165-47d6-8313-cb0c5d7ac93a Mar 25 17:24:37.597: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-67d1820e-e165-47d6-8313-cb0c5d7ac93a] Namespace:persistent-local-volumes-test-1304 PodName:hostexec-latest-worker-vrgvk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:24:37.597: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:24:37.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1304" for this suite. 
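For the blockfswithformat volume type, everything the hostexec pod runs through nsenter above amounts to a short node-side lifecycle: create a 20 MiB backing file, attach it to a free loop device, format and mount it, hand the directory to a local PV, then unwind in reverse order during cleanup. Condensed into one script (an illustrative path stands in for the generated UUID directory, and the losetup lookup is simplified from the logged form), it looks like this:

# Node-side sketch of the commands logged above for [Volume type: blockfswithformat].
DIR=/tmp/local-volume-test-demo

# Create a 20 MiB backing file (4096 * 5120 bytes) and attach it to a loop device.
mkdir -p "$DIR"
dd if=/dev/zero of="$DIR/file" bs=4096 count=5120
losetup -f "$DIR/file"
LOOP_DEV=$(losetup | grep "$DIR/file" | awk '{ print $1 }')

# blockfswithformat: put an ext4 filesystem on the loop device and mount it
# at the directory the local PV will point to.
mkfs -t ext4 "$LOOP_DEV"
mount -t ext4 "$LOOP_DEV" "$DIR"
chmod o+rwx "$DIR"

# ... PV/PVC creation and the in-pod read/write happen through the API ...

# Teardown in reverse order, as in the AfterEach block above.
umount "$DIR"
losetup -d "$LOOP_DEV"
rm -r "$DIR"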
• [SLOW TEST:17.849 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":115,"completed":10,"skipped":480,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:24:37.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when podInfoOnMount=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-8300 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 17:24:37.980: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8300-1776/csi-attacher Mar 25 17:24:37.983: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8300 Mar 25 17:24:37.983: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8300 Mar 25 17:24:38.003: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8300 Mar 25 17:24:38.015: INFO: creating *v1.Role: csi-mock-volumes-8300-1776/external-attacher-cfg-csi-mock-volumes-8300 Mar 25 17:24:38.083: INFO: creating *v1.RoleBinding: csi-mock-volumes-8300-1776/csi-attacher-role-cfg Mar 25 17:24:38.087: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8300-1776/csi-provisioner Mar 25 17:24:38.093: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8300 Mar 25 17:24:38.093: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8300 Mar 25 17:24:38.113: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8300 Mar 25 17:24:38.129: INFO: creating *v1.Role: csi-mock-volumes-8300-1776/external-provisioner-cfg-csi-mock-volumes-8300 Mar 25 17:24:38.149: INFO: creating *v1.RoleBinding: csi-mock-volumes-8300-1776/csi-provisioner-role-cfg Mar 25 17:24:38.166: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8300-1776/csi-resizer Mar 25 17:24:38.171: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8300 Mar 25 17:24:38.171: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8300 Mar 25 17:24:38.177: INFO: creating 
*v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8300 Mar 25 17:24:38.183: INFO: creating *v1.Role: csi-mock-volumes-8300-1776/external-resizer-cfg-csi-mock-volumes-8300 Mar 25 17:24:38.221: INFO: creating *v1.RoleBinding: csi-mock-volumes-8300-1776/csi-resizer-role-cfg Mar 25 17:24:38.231: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8300-1776/csi-snapshotter Mar 25 17:24:38.248: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8300 Mar 25 17:24:38.248: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8300 Mar 25 17:24:38.267: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8300 Mar 25 17:24:38.273: INFO: creating *v1.Role: csi-mock-volumes-8300-1776/external-snapshotter-leaderelection-csi-mock-volumes-8300 Mar 25 17:24:38.279: INFO: creating *v1.RoleBinding: csi-mock-volumes-8300-1776/external-snapshotter-leaderelection Mar 25 17:24:38.309: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8300-1776/csi-mock Mar 25 17:24:38.340: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8300 Mar 25 17:24:38.359: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8300 Mar 25 17:24:38.375: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8300 Mar 25 17:24:38.395: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8300 Mar 25 17:24:38.407: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8300 Mar 25 17:24:38.413: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8300 Mar 25 17:24:38.478: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8300 Mar 25 17:24:38.482: INFO: creating *v1.StatefulSet: csi-mock-volumes-8300-1776/csi-mockplugin Mar 25 17:24:38.497: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8300 Mar 25 17:24:38.543: INFO: creating *v1.StatefulSet: csi-mock-volumes-8300-1776/csi-mockplugin-attacher Mar 25 17:24:38.573: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8300" Mar 25 17:24:38.623: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8300 to register on node latest-worker2 STEP: Creating pod Mar 25 17:24:48.190: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 17:24:48.200: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-fsv52] to have phase Bound Mar 25 17:24:48.217: INFO: PersistentVolumeClaim pvc-fsv52 found but phase is Pending instead of Bound. 
Mar 25 17:24:50.221: INFO: PersistentVolumeClaim pvc-fsv52 found and phase=Bound (2.021413202s) STEP: Deleting the previously created pod Mar 25 17:24:58.256: INFO: Deleting pod "pvc-volume-tester-6tfnr" in namespace "csi-mock-volumes-8300" Mar 25 17:24:58.269: INFO: Wait up to 5m0s for pod "pvc-volume-tester-6tfnr" to be fully deleted STEP: Checking CSI driver logs Mar 25 17:25:56.309: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/c2e82cbf-e18e-481d-98ab-8ab504d59fe9/volumes/kubernetes.io~csi/pvc-0a693720-5935-4014-a96e-c91ed5222d59/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-6tfnr Mar 25 17:25:56.309: INFO: Deleting pod "pvc-volume-tester-6tfnr" in namespace "csi-mock-volumes-8300" STEP: Deleting claim pvc-fsv52 Mar 25 17:25:56.318: INFO: Waiting up to 2m0s for PersistentVolume pvc-0a693720-5935-4014-a96e-c91ed5222d59 to get deleted Mar 25 17:25:56.323: INFO: PersistentVolume pvc-0a693720-5935-4014-a96e-c91ed5222d59 found and phase=Bound (4.657482ms) Mar 25 17:25:58.327: INFO: PersistentVolume pvc-0a693720-5935-4014-a96e-c91ed5222d59 was removed STEP: Deleting storageclass csi-mock-volumes-8300-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8300 STEP: Waiting for namespaces [csi-mock-volumes-8300] to vanish STEP: uninstalling csi mock driver Mar 25 17:26:04.348: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8300-1776/csi-attacher Mar 25 17:26:04.355: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8300 Mar 25 17:26:04.361: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8300 Mar 25 17:26:04.402: INFO: deleting *v1.Role: csi-mock-volumes-8300-1776/external-attacher-cfg-csi-mock-volumes-8300 Mar 25 17:26:04.409: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8300-1776/csi-attacher-role-cfg Mar 25 17:26:04.427: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8300-1776/csi-provisioner Mar 25 17:26:04.439: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8300 Mar 25 17:26:04.451: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8300 Mar 25 17:26:04.462: INFO: deleting *v1.Role: csi-mock-volumes-8300-1776/external-provisioner-cfg-csi-mock-volumes-8300 Mar 25 17:26:04.469: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8300-1776/csi-provisioner-role-cfg Mar 25 17:26:04.475: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8300-1776/csi-resizer Mar 25 17:26:04.481: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8300 Mar 25 17:26:04.487: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8300 Mar 25 17:26:04.510: INFO: deleting *v1.Role: csi-mock-volumes-8300-1776/external-resizer-cfg-csi-mock-volumes-8300 Mar 25 17:26:04.516: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8300-1776/csi-resizer-role-cfg Mar 25 17:26:04.536: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8300-1776/csi-snapshotter Mar 25 17:26:04.547: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8300 Mar 25 17:26:04.553: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8300 Mar 25 17:26:04.563: INFO: deleting *v1.Role: csi-mock-volumes-8300-1776/external-snapshotter-leaderelection-csi-mock-volumes-8300 Mar 25 17:26:04.570: 
INFO: deleting *v1.RoleBinding: csi-mock-volumes-8300-1776/external-snapshotter-leaderelection Mar 25 17:26:04.577: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8300-1776/csi-mock Mar 25 17:26:04.585: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8300 Mar 25 17:26:04.590: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8300 Mar 25 17:26:04.599: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8300 Mar 25 17:26:04.649: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8300 Mar 25 17:26:04.661: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8300 Mar 25 17:26:04.666: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8300 Mar 25 17:26:04.672: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8300 Mar 25 17:26:04.678: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8300-1776/csi-mockplugin Mar 25 17:26:04.685: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8300 Mar 25 17:26:04.690: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8300-1776/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-8300-1776 STEP: Waiting for namespaces [csi-mock-volumes-8300-1776] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:27:02.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:145.131 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should not be passed when podInfoOnMount=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":115,"completed":11,"skipped":519,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:27:02.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] contain ephemeral=true when using inline volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-2323 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 17:27:03.205: INFO: creating *v1.ServiceAccount: 
csi-mock-volumes-2323-2167/csi-attacher Mar 25 17:27:03.234: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2323 Mar 25 17:27:03.234: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2323 Mar 25 17:27:03.238: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2323 Mar 25 17:27:03.251: INFO: creating *v1.Role: csi-mock-volumes-2323-2167/external-attacher-cfg-csi-mock-volumes-2323 Mar 25 17:27:03.277: INFO: creating *v1.RoleBinding: csi-mock-volumes-2323-2167/csi-attacher-role-cfg Mar 25 17:27:03.293: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2323-2167/csi-provisioner Mar 25 17:27:03.311: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2323 Mar 25 17:27:03.311: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-2323 Mar 25 17:27:03.323: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2323 Mar 25 17:27:03.329: INFO: creating *v1.Role: csi-mock-volumes-2323-2167/external-provisioner-cfg-csi-mock-volumes-2323 Mar 25 17:27:03.360: INFO: creating *v1.RoleBinding: csi-mock-volumes-2323-2167/csi-provisioner-role-cfg Mar 25 17:27:03.392: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2323-2167/csi-resizer Mar 25 17:27:03.425: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2323 Mar 25 17:27:03.425: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2323 Mar 25 17:27:03.431: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2323 Mar 25 17:27:03.437: INFO: creating *v1.Role: csi-mock-volumes-2323-2167/external-resizer-cfg-csi-mock-volumes-2323 Mar 25 17:27:03.451: INFO: creating *v1.RoleBinding: csi-mock-volumes-2323-2167/csi-resizer-role-cfg Mar 25 17:27:03.504: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2323-2167/csi-snapshotter Mar 25 17:27:03.514: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2323 Mar 25 17:27:03.514: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-2323 Mar 25 17:27:03.533: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2323 Mar 25 17:27:03.543: INFO: creating *v1.Role: csi-mock-volumes-2323-2167/external-snapshotter-leaderelection-csi-mock-volumes-2323 Mar 25 17:27:03.575: INFO: creating *v1.RoleBinding: csi-mock-volumes-2323-2167/external-snapshotter-leaderelection Mar 25 17:27:03.599: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2323-2167/csi-mock Mar 25 17:27:03.641: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2323 Mar 25 17:27:03.644: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2323 Mar 25 17:27:03.654: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2323 Mar 25 17:27:03.691: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2323 Mar 25 17:27:03.715: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2323 Mar 25 17:27:03.726: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2323 Mar 25 17:27:03.732: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2323 Mar 25 17:27:03.738: INFO: creating *v1.StatefulSet: csi-mock-volumes-2323-2167/csi-mockplugin Mar 25 17:27:03.797: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2323 Mar 25 17:27:03.805: INFO: creating *v1.StatefulSet: 
csi-mock-volumes-2323-2167/csi-mockplugin-attacher Mar 25 17:27:03.833: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2323" Mar 25 17:27:03.847: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2323 to register on node latest-worker2 STEP: Creating pod STEP: checking for CSIInlineVolumes feature Mar 25 17:27:29.853: INFO: Error getting logs for pod inline-volume-84qmw: the server rejected our request for an unknown reason (get pods inline-volume-84qmw) Mar 25 17:27:30.279: INFO: Deleting pod "inline-volume-84qmw" in namespace "csi-mock-volumes-2323" Mar 25 17:27:30.800: INFO: Wait up to 5m0s for pod "inline-volume-84qmw" to be fully deleted STEP: Deleting the previously created pod Mar 25 17:27:33.787: INFO: Deleting pod "pvc-volume-tester-pf485" in namespace "csi-mock-volumes-2323" Mar 25 17:27:34.047: INFO: Wait up to 5m0s for pod "pvc-volume-tester-pf485" to be fully deleted STEP: Checking CSI driver logs Mar 25 17:28:06.877: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 86b21113-3f52-4b18-8b1b-6b5e57ac8ba2 Mar 25 17:28:06.877: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default Mar 25 17:28:06.877: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true Mar 25 17:28:06.877: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-pf485 Mar 25 17:28:06.877: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-2323 Mar 25 17:28:06.877: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-f0a6eb916fbeb6e3c434ae0ddc3d10466b83e531c090ee530774c86fe1b3fac4","target_path":"/var/lib/kubelet/pods/86b21113-3f52-4b18-8b1b-6b5e57ac8ba2/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-pf485 Mar 25 17:28:06.877: INFO: Deleting pod "pvc-volume-tester-pf485" in namespace "csi-mock-volumes-2323" STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-2323 STEP: Waiting for namespaces [csi-mock-volumes-2323] to vanish STEP: uninstalling csi mock driver Mar 25 17:28:12.889: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2323-2167/csi-attacher Mar 25 17:28:12.896: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2323 Mar 25 17:28:12.916: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2323 Mar 25 17:28:12.935: INFO: deleting *v1.Role: csi-mock-volumes-2323-2167/external-attacher-cfg-csi-mock-volumes-2323 Mar 25 17:28:12.946: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2323-2167/csi-attacher-role-cfg Mar 25 17:28:12.953: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2323-2167/csi-provisioner Mar 25 17:28:12.959: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2323 Mar 25 17:28:12.969: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2323 Mar 25 17:28:12.977: INFO: deleting *v1.Role: csi-mock-volumes-2323-2167/external-provisioner-cfg-csi-mock-volumes-2323 Mar 25 17:28:12.997: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2323-2167/csi-provisioner-role-cfg Mar 25 17:28:13.007: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2323-2167/csi-resizer Mar 25 17:28:13.023: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2323 Mar 25 17:28:13.042: INFO: deleting *v1.ClusterRoleBinding: 
csi-resizer-role-csi-mock-volumes-2323 Mar 25 17:28:13.054: INFO: deleting *v1.Role: csi-mock-volumes-2323-2167/external-resizer-cfg-csi-mock-volumes-2323 Mar 25 17:28:13.061: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2323-2167/csi-resizer-role-cfg Mar 25 17:28:13.067: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2323-2167/csi-snapshotter Mar 25 17:28:13.082: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2323 Mar 25 17:28:13.089: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2323 Mar 25 17:28:13.126: INFO: deleting *v1.Role: csi-mock-volumes-2323-2167/external-snapshotter-leaderelection-csi-mock-volumes-2323 Mar 25 17:28:13.132: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2323-2167/external-snapshotter-leaderelection Mar 25 17:28:13.138: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2323-2167/csi-mock Mar 25 17:28:13.145: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2323 Mar 25 17:28:13.156: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2323 Mar 25 17:28:13.162: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2323 Mar 25 17:28:13.168: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2323 Mar 25 17:28:13.174: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2323 Mar 25 17:28:13.180: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2323 Mar 25 17:28:13.186: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2323 Mar 25 17:28:13.205: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2323-2167/csi-mockplugin Mar 25 17:28:13.217: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-2323 Mar 25 17:28:13.236: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2323-2167/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-2323-2167 STEP: Waiting for namespaces [csi-mock-volumes-2323-2167] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:29:09.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:126.403 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 contain ephemeral=true when using inline volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":115,"completed":12,"skipped":628,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:29:09.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-84201254-6748-413b-b879-54dc17f2b558" Mar 25 17:29:13.469: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-84201254-6748-413b-b879-54dc17f2b558 && dd if=/dev/zero of=/tmp/local-volume-test-84201254-6748-413b-b879-54dc17f2b558/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-84201254-6748-413b-b879-54dc17f2b558/file] Namespace:persistent-local-volumes-test-1478 PodName:hostexec-latest-worker2-ngqqc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:29:13.469: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:29:13.637: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-84201254-6748-413b-b879-54dc17f2b558/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1478 PodName:hostexec-latest-worker2-ngqqc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:29:13.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 17:29:13.752: INFO: Creating a PV followed by a PVC Mar 25 17:29:13.772: INFO: Waiting for PV local-pvc7vqz to bind to PVC pvc-cqrr2 Mar 25 17:29:13.772: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-cqrr2] to have phase Bound Mar 25 17:29:13.852: INFO: PersistentVolumeClaim pvc-cqrr2 found but phase is Pending instead of Bound. 
Mar 25 17:29:15.857: INFO: PersistentVolumeClaim pvc-cqrr2 found and phase=Bound (2.085552312s) Mar 25 17:29:15.857: INFO: Waiting up to 3m0s for PersistentVolume local-pvc7vqz to have phase Bound Mar 25 17:29:15.860: INFO: PersistentVolume local-pvc7vqz found and phase=Bound (2.463092ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Mar 25 17:29:21.910: INFO: pod "pod-079bfb31-23c5-4a38-b495-e10aceba8da3" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 17:29:21.910: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1478 PodName:pod-079bfb31-23c5-4a38-b495-e10aceba8da3 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:29:21.910: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:29:22.037: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000067 seconds, 262.4KB/s", err: Mar 25 17:29:22.037: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-1478 PodName:pod-079bfb31-23c5-4a38-b495-e10aceba8da3 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:29:22.037: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:29:22.148: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-079bfb31-23c5-4a38-b495-e10aceba8da3 in namespace persistent-local-volumes-test-1478 STEP: Creating pod2 STEP: Creating a pod Mar 25 17:29:26.262: INFO: pod "pod-b0f3bf21-5153-47ef-abf8-93959f2864b4" created on Node "latest-worker2" STEP: Reading in pod2 Mar 25 17:29:26.262: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-1478 PodName:pod-b0f3bf21-5153-47ef-abf8-93959f2864b4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:29:26.262: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:29:26.398: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-b0f3bf21-5153-47ef-abf8-93959f2864b4 in namespace persistent-local-volumes-test-1478 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:29:26.403: INFO: Deleting PersistentVolumeClaim "pvc-cqrr2" Mar 25 17:29:26.410: INFO: Deleting 
PersistentVolume "local-pvc7vqz" Mar 25 17:29:26.431: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-84201254-6748-413b-b879-54dc17f2b558/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1478 PodName:hostexec-latest-worker2-ngqqc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:29:26.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-84201254-6748-413b-b879-54dc17f2b558/file Mar 25 17:29:26.547: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-1478 PodName:hostexec-latest-worker2-ngqqc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:29:26.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-84201254-6748-413b-b879-54dc17f2b558 Mar 25 17:29:26.650: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-84201254-6748-413b-b879-54dc17f2b558] Namespace:persistent-local-volumes-test-1478 PodName:hostexec-latest-worker2-ngqqc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:29:26.650: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:29:26.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1478" for this suite. 
• [SLOW TEST:17.488 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":115,"completed":13,"skipped":655,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:29:26.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 STEP: Building a driver namespace object, basename csi-mock-volumes-4720 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 17:29:27.053: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4720-4750/csi-attacher Mar 25 17:29:27.056: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4720 Mar 25 17:29:27.056: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4720 Mar 25 17:29:27.061: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4720 Mar 25 17:29:27.099: INFO: creating *v1.Role: csi-mock-volumes-4720-4750/external-attacher-cfg-csi-mock-volumes-4720 Mar 25 17:29:27.109: INFO: creating *v1.RoleBinding: csi-mock-volumes-4720-4750/csi-attacher-role-cfg Mar 25 17:29:27.179: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4720-4750/csi-provisioner Mar 25 17:29:27.191: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4720 Mar 25 17:29:27.191: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4720 Mar 25 17:29:27.197: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4720 Mar 25 17:29:27.203: INFO: creating *v1.Role: csi-mock-volumes-4720-4750/external-provisioner-cfg-csi-mock-volumes-4720 Mar 25 17:29:27.209: INFO: creating *v1.RoleBinding: csi-mock-volumes-4720-4750/csi-provisioner-role-cfg Mar 25 17:29:27.232: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4720-4750/csi-resizer Mar 25 17:29:27.256: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4720 Mar 25 17:29:27.257: INFO: Define cluster role 
external-resizer-runner-csi-mock-volumes-4720 Mar 25 17:29:27.269: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4720 Mar 25 17:29:27.301: INFO: creating *v1.Role: csi-mock-volumes-4720-4750/external-resizer-cfg-csi-mock-volumes-4720 Mar 25 17:29:27.315: INFO: creating *v1.RoleBinding: csi-mock-volumes-4720-4750/csi-resizer-role-cfg Mar 25 17:29:27.329: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4720-4750/csi-snapshotter Mar 25 17:29:27.335: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4720 Mar 25 17:29:27.335: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4720 Mar 25 17:29:27.341: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4720 Mar 25 17:29:27.357: INFO: creating *v1.Role: csi-mock-volumes-4720-4750/external-snapshotter-leaderelection-csi-mock-volumes-4720 Mar 25 17:29:27.381: INFO: creating *v1.RoleBinding: csi-mock-volumes-4720-4750/external-snapshotter-leaderelection Mar 25 17:29:27.395: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4720-4750/csi-mock Mar 25 17:29:27.401: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4720 Mar 25 17:29:27.421: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4720 Mar 25 17:29:27.424: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4720 Mar 25 17:29:27.442: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4720 Mar 25 17:29:27.472: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4720 Mar 25 17:29:27.485: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4720 Mar 25 17:29:27.490: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4720 Mar 25 17:29:27.497: INFO: creating *v1.StatefulSet: csi-mock-volumes-4720-4750/csi-mockplugin Mar 25 17:29:27.504: INFO: creating *v1.StatefulSet: csi-mock-volumes-4720-4750/csi-mockplugin-attacher Mar 25 17:29:27.555: INFO: creating *v1.StatefulSet: csi-mock-volumes-4720-4750/csi-mockplugin-resizer Mar 25 17:29:27.622: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4720 to register on node latest-worker2 STEP: Creating pod Mar 25 17:29:37.542: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 17:29:37.548: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-nv65n] to have phase Bound Mar 25 17:29:37.552: INFO: PersistentVolumeClaim pvc-nv65n found but phase is Pending instead of Bound. 
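The online expansion exercised in the rest of this spec boils down to raising the claim's requested size and waiting for the new capacity to land in its status while the pod keeps running. A minimal kubectl sketch, assuming the StorageClass sets allowVolumeExpansion: true (the 2Gi target below is illustrative):

  # grow the bound claim in place
  kubectl -n csi-mock-volumes-4720 patch pvc pvc-nv65n \
    -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'

  # poll until controller and node expansion have finished
  kubectl -n csi-mock-volumes-4720 get pvc pvc-nv65n -o jsonpath='{.status.capacity.storage}'
  kubectl -n csi-mock-volumes-4720 get pvc pvc-nv65n -o jsonpath='{.status.conditions}'
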
Mar 25 17:29:39.557: INFO: PersistentVolumeClaim pvc-nv65n found and phase=Bound (2.009015778s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-fmc8q Mar 25 17:31:05.644: INFO: Deleting pod "pvc-volume-tester-fmc8q" in namespace "csi-mock-volumes-4720" Mar 25 17:31:05.664: INFO: Wait up to 5m0s for pod "pvc-volume-tester-fmc8q" to be fully deleted STEP: Deleting claim pvc-nv65n Mar 25 17:32:05.697: INFO: Waiting up to 2m0s for PersistentVolume pvc-df695bb3-0b23-4792-a70d-6a67b825ce04 to get deleted Mar 25 17:32:05.705: INFO: PersistentVolume pvc-df695bb3-0b23-4792-a70d-6a67b825ce04 found and phase=Bound (8.050222ms) Mar 25 17:32:07.710: INFO: PersistentVolume pvc-df695bb3-0b23-4792-a70d-6a67b825ce04 was removed STEP: Deleting storageclass csi-mock-volumes-4720-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4720 STEP: Waiting for namespaces [csi-mock-volumes-4720] to vanish STEP: uninstalling csi mock driver Mar 25 17:32:13.736: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4720-4750/csi-attacher Mar 25 17:32:13.741: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4720 Mar 25 17:32:13.749: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4720 Mar 25 17:32:13.760: INFO: deleting *v1.Role: csi-mock-volumes-4720-4750/external-attacher-cfg-csi-mock-volumes-4720 Mar 25 17:32:13.767: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4720-4750/csi-attacher-role-cfg Mar 25 17:32:13.773: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4720-4750/csi-provisioner Mar 25 17:32:13.789: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4720 Mar 25 17:32:13.810: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4720 Mar 25 17:32:13.851: INFO: deleting *v1.Role: csi-mock-volumes-4720-4750/external-provisioner-cfg-csi-mock-volumes-4720 Mar 25 17:32:13.864: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4720-4750/csi-provisioner-role-cfg Mar 25 17:32:13.870: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4720-4750/csi-resizer Mar 25 17:32:13.875: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4720 Mar 25 17:32:13.885: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4720 Mar 25 17:32:13.922: INFO: deleting *v1.Role: csi-mock-volumes-4720-4750/external-resizer-cfg-csi-mock-volumes-4720 Mar 25 17:32:13.930: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4720-4750/csi-resizer-role-cfg Mar 25 17:32:13.936: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4720-4750/csi-snapshotter Mar 25 17:32:13.941: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4720 Mar 25 17:32:13.947: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4720 Mar 25 17:32:13.971: INFO: deleting *v1.Role: csi-mock-volumes-4720-4750/external-snapshotter-leaderelection-csi-mock-volumes-4720 Mar 25 17:32:13.984: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4720-4750/external-snapshotter-leaderelection Mar 25 17:32:13.989: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4720-4750/csi-mock Mar 25 17:32:13.995: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4720 Mar 25 17:32:14.001: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4720 Mar 25 17:32:14.012: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4720 Mar 25 17:32:14.019: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4720 Mar 25 17:32:14.035: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4720 Mar 25 17:32:14.043: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4720 Mar 25 17:32:14.067: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4720 Mar 25 17:32:14.092: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4720-4750/csi-mockplugin Mar 25 17:32:14.097: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4720-4750/csi-mockplugin-attacher Mar 25 17:32:14.104: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4720-4750/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-4720-4750 STEP: Waiting for namespaces [csi-mock-volumes-4720-4750] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:33:10.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:223.352 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI online volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:672 should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on","total":115,"completed":14,"skipped":712,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:33:10.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59 STEP: Creating configMap with name configmap-test-volume-2a0df419-0b96-4bb8-a2cf-3b5a0e9ee2d8 STEP: Creating a pod to test consume configMaps Mar 25 17:33:10.236: INFO: Waiting up to 5m0s for pod "pod-configmaps-f4d4b533-a7c3-4ad7-976f-27a33d443197" in namespace "configmap-3377" to be "Succeeded or Failed" Mar 25 17:33:10.240: INFO: Pod "pod-configmaps-f4d4b533-a7c3-4ad7-976f-27a33d443197": Phase="Pending", Reason="", readiness=false. Elapsed: 3.571226ms Mar 25 17:33:12.249: INFO: Pod "pod-configmaps-f4d4b533-a7c3-4ad7-976f-27a33d443197": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012926503s Mar 25 17:33:14.254: INFO: Pod "pod-configmaps-f4d4b533-a7c3-4ad7-976f-27a33d443197": Phase="Running", Reason="", readiness=true. Elapsed: 4.01763461s Mar 25 17:33:16.259: INFO: Pod "pod-configmaps-f4d4b533-a7c3-4ad7-976f-27a33d443197": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02315608s STEP: Saw pod success Mar 25 17:33:16.259: INFO: Pod "pod-configmaps-f4d4b533-a7c3-4ad7-976f-27a33d443197" satisfied condition "Succeeded or Failed" Mar 25 17:33:16.263: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-f4d4b533-a7c3-4ad7-976f-27a33d443197 container agnhost-container: STEP: delete the pod Mar 25 17:33:16.347: INFO: Waiting for pod pod-configmaps-f4d4b533-a7c3-4ad7-976f-27a33d443197 to disappear Mar 25 17:33:16.354: INFO: Pod pod-configmaps-f4d4b533-a7c3-4ad7-976f-27a33d443197 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:33:16.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3377" for this suite. • [SLOW TEST:6.238 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":115,"completed":15,"skipped":717,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:33:16.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110 STEP: Creating configMap with name projected-configmap-test-volume-map-2fb3d638-3dbd-47ca-b39d-daba4a7a0dd7 STEP: Creating a pod to test consume configMaps Mar 25 17:33:16.471: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-de07fb57-8910-48ca-ba8c-40e122cf23ec" in namespace "projected-7442" to be "Succeeded or Failed" Mar 25 17:33:16.474: INFO: Pod "pod-projected-configmaps-de07fb57-8910-48ca-ba8c-40e122cf23ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.543144ms Mar 25 17:33:18.480: INFO: Pod "pod-projected-configmaps-de07fb57-8910-48ca-ba8c-40e122cf23ec": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008526323s Mar 25 17:33:20.485: INFO: Pod "pod-projected-configmaps-de07fb57-8910-48ca-ba8c-40e122cf23ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01369631s STEP: Saw pod success Mar 25 17:33:20.485: INFO: Pod "pod-projected-configmaps-de07fb57-8910-48ca-ba8c-40e122cf23ec" satisfied condition "Succeeded or Failed" Mar 25 17:33:20.488: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-de07fb57-8910-48ca-ba8c-40e122cf23ec container agnhost-container: STEP: delete the pod Mar 25 17:33:20.520: INFO: Waiting for pod pod-projected-configmaps-de07fb57-8910-48ca-ba8c-40e122cf23ec to disappear Mar 25 17:33:20.527: INFO: Pod pod-projected-configmaps-de07fb57-8910-48ca-ba8c-40e122cf23ec no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:33:20.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7442" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":115,"completed":16,"skipped":745,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:33:20.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when CSIDriver does not exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-6016 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 17:33:21.033: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6016-1888/csi-attacher Mar 25 17:33:21.052: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6016 Mar 25 17:33:21.052: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6016 Mar 25 17:33:21.055: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6016 Mar 25 17:33:21.073: INFO: creating *v1.Role: csi-mock-volumes-6016-1888/external-attacher-cfg-csi-mock-volumes-6016 Mar 25 17:33:21.101: INFO: creating *v1.RoleBinding: csi-mock-volumes-6016-1888/csi-attacher-role-cfg Mar 25 17:33:21.136: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6016-1888/csi-provisioner Mar 25 17:33:21.152: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6016 Mar 25 17:33:21.152: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6016 Mar 25 17:33:21.165: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6016 Mar 25 17:33:21.168: INFO: creating *v1.Role: csi-mock-volumes-6016-1888/external-provisioner-cfg-csi-mock-volumes-6016 Mar 25 17:33:21.181: INFO: creating 
*v1.RoleBinding: csi-mock-volumes-6016-1888/csi-provisioner-role-cfg Mar 25 17:33:21.215: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6016-1888/csi-resizer Mar 25 17:33:21.238: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6016 Mar 25 17:33:21.238: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6016 Mar 25 17:33:21.252: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6016 Mar 25 17:33:21.258: INFO: creating *v1.Role: csi-mock-volumes-6016-1888/external-resizer-cfg-csi-mock-volumes-6016 Mar 25 17:33:21.264: INFO: creating *v1.RoleBinding: csi-mock-volumes-6016-1888/csi-resizer-role-cfg Mar 25 17:33:21.304: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6016-1888/csi-snapshotter Mar 25 17:33:21.336: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6016 Mar 25 17:33:21.336: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6016 Mar 25 17:33:21.342: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6016 Mar 25 17:33:21.348: INFO: creating *v1.Role: csi-mock-volumes-6016-1888/external-snapshotter-leaderelection-csi-mock-volumes-6016 Mar 25 17:33:21.364: INFO: creating *v1.RoleBinding: csi-mock-volumes-6016-1888/external-snapshotter-leaderelection Mar 25 17:33:21.395: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6016-1888/csi-mock Mar 25 17:33:21.429: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6016 Mar 25 17:33:21.434: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6016 Mar 25 17:33:21.439: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6016 Mar 25 17:33:21.443: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6016 Mar 25 17:33:21.466: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6016 Mar 25 17:33:21.496: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6016 Mar 25 17:33:21.522: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6016 Mar 25 17:33:21.528: INFO: creating *v1.StatefulSet: csi-mock-volumes-6016-1888/csi-mockplugin Mar 25 17:33:21.555: INFO: creating *v1.StatefulSet: csi-mock-volumes-6016-1888/csi-mockplugin-attacher Mar 25 17:33:21.587: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6016 to register on node latest-worker STEP: Creating pod Mar 25 17:33:31.236: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 17:33:31.257: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-q46mm] to have phase Bound Mar 25 17:33:31.265: INFO: PersistentVolumeClaim pvc-q46mm found but phase is Pending instead of Bound. 
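What this spec ultimately asserts is that, because no CSIDriver object asks for pod information, the mock driver's NodePublishVolume requests carry no csi.storage.k8s.io/* entries in their volume_context. The "Checking CSI driver logs" step further down does that by parsing the mock plugin's recorded gRPC calls, roughly equivalent to this manual check (the pod name assumes the usual StatefulSet ordinal and is not taken from the log):

  kubectl -n csi-mock-volumes-6016-1888 logs csi-mockplugin-0 --all-containers \
    | grep NodePublishVolume
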
Mar 25 17:33:33.271: INFO: PersistentVolumeClaim pvc-q46mm found and phase=Bound (2.013549684s) STEP: Deleting the previously created pod Mar 25 17:33:39.311: INFO: Deleting pod "pvc-volume-tester-5pv9l" in namespace "csi-mock-volumes-6016" Mar 25 17:33:39.317: INFO: Wait up to 5m0s for pod "pvc-volume-tester-5pv9l" to be fully deleted STEP: Checking CSI driver logs Mar 25 17:34:47.388: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/ab97d922-28b0-427d-8646-4d31deb18d99/volumes/kubernetes.io~csi/pvc-f9892cab-5d35-4598-b4d6-d8d3262bc15e/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-5pv9l Mar 25 17:34:47.388: INFO: Deleting pod "pvc-volume-tester-5pv9l" in namespace "csi-mock-volumes-6016" STEP: Deleting claim pvc-q46mm Mar 25 17:34:47.397: INFO: Waiting up to 2m0s for PersistentVolume pvc-f9892cab-5d35-4598-b4d6-d8d3262bc15e to get deleted Mar 25 17:34:47.404: INFO: PersistentVolume pvc-f9892cab-5d35-4598-b4d6-d8d3262bc15e found and phase=Bound (6.984388ms) Mar 25 17:34:49.409: INFO: PersistentVolume pvc-f9892cab-5d35-4598-b4d6-d8d3262bc15e was removed STEP: Deleting storageclass csi-mock-volumes-6016-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6016 STEP: Waiting for namespaces [csi-mock-volumes-6016] to vanish STEP: uninstalling csi mock driver Mar 25 17:34:55.430: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6016-1888/csi-attacher Mar 25 17:34:55.436: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6016 Mar 25 17:34:55.445: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6016 Mar 25 17:34:55.454: INFO: deleting *v1.Role: csi-mock-volumes-6016-1888/external-attacher-cfg-csi-mock-volumes-6016 Mar 25 17:34:55.460: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6016-1888/csi-attacher-role-cfg Mar 25 17:34:55.472: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6016-1888/csi-provisioner Mar 25 17:34:55.490: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6016 Mar 25 17:34:55.497: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6016 Mar 25 17:34:55.507: INFO: deleting *v1.Role: csi-mock-volumes-6016-1888/external-provisioner-cfg-csi-mock-volumes-6016 Mar 25 17:34:55.563: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6016-1888/csi-provisioner-role-cfg Mar 25 17:34:55.648: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6016-1888/csi-resizer Mar 25 17:34:55.757: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6016 Mar 25 17:34:55.809: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6016 Mar 25 17:34:55.820: INFO: deleting *v1.Role: csi-mock-volumes-6016-1888/external-resizer-cfg-csi-mock-volumes-6016 Mar 25 17:34:55.825: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6016-1888/csi-resizer-role-cfg Mar 25 17:34:55.966: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6016-1888/csi-snapshotter Mar 25 17:34:56.034: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6016 Mar 25 17:34:56.048: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6016 Mar 25 17:34:56.059: INFO: deleting *v1.Role: csi-mock-volumes-6016-1888/external-snapshotter-leaderelection-csi-mock-volumes-6016 Mar 25 17:34:56.085: 
INFO: deleting *v1.RoleBinding: csi-mock-volumes-6016-1888/external-snapshotter-leaderelection Mar 25 17:34:56.102: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6016-1888/csi-mock Mar 25 17:34:56.120: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6016 Mar 25 17:34:56.129: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6016 Mar 25 17:34:56.137: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6016 Mar 25 17:34:56.143: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6016 Mar 25 17:34:56.169: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6016 Mar 25 17:34:56.197: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6016 Mar 25 17:34:56.208: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6016 Mar 25 17:34:56.215: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6016-1888/csi-mockplugin Mar 25 17:34:56.220: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6016-1888/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-6016-1888 STEP: Waiting for namespaces [csi-mock-volumes-6016-1888] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:35:48.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:147.707 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should not be passed when CSIDriver does not exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":115,"completed":17,"skipped":835,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:35:48.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 17:35:52.359: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
/tmp/local-volume-test-2f09b711-cec8-4867-8171-9094d760d691] Namespace:persistent-local-volumes-test-5337 PodName:hostexec-latest-worker-mr58w ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:35:52.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 17:35:52.497: INFO: Creating a PV followed by a PVC Mar 25 17:35:52.509: INFO: Waiting for PV local-pvbrk88 to bind to PVC pvc-2hl5n Mar 25 17:35:52.509: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-2hl5n] to have phase Bound Mar 25 17:35:52.531: INFO: PersistentVolumeClaim pvc-2hl5n found but phase is Pending instead of Bound. Mar 25 17:35:54.539: INFO: PersistentVolumeClaim pvc-2hl5n found and phase=Bound (2.029267241s) Mar 25 17:35:54.539: INFO: Waiting up to 3m0s for PersistentVolume local-pvbrk88 to have phase Bound Mar 25 17:35:54.541: INFO: PersistentVolume local-pvbrk88 found and phase=Bound (2.733953ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Mar 25 17:35:58.587: INFO: pod "pod-999444eb-13ac-43d9-804f-951260588ff0" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 17:35:58.587: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5337 PodName:pod-999444eb-13ac-43d9-804f-951260588ff0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:35:58.587: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:35:58.718: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 17:35:58.718: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5337 PodName:pod-999444eb-13ac-43d9-804f-951260588ff0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:35:58.718: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:35:58.814: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-999444eb-13ac-43d9-804f-951260588ff0 in namespace persistent-local-volumes-test-5337 STEP: Creating pod2 STEP: Creating a pod Mar 25 17:36:04.858: INFO: pod "pod-d20d76a1-6b7c-4f0e-909b-14ac0e7cec22" created on Node "latest-worker" STEP: Reading in pod2 Mar 25 17:36:04.858: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5337 PodName:pod-d20d76a1-6b7c-4f0e-909b-14ac0e7cec22 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:36:04.858: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:36:04.965: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-d20d76a1-6b7c-4f0e-909b-14ac0e7cec22 in namespace persistent-local-volumes-test-5337 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:36:05.044: INFO: Deleting PersistentVolumeClaim "pvc-2hl5n" Mar 25 17:36:05.098: INFO: Deleting 
PersistentVolume "local-pvbrk88" STEP: Removing the test directory Mar 25 17:36:05.732: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2f09b711-cec8-4867-8171-9094d760d691] Namespace:persistent-local-volumes-test-5337 PodName:hostexec-latest-worker-mr58w ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:36:05.732: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:36:05.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5337" for this suite. • [SLOW TEST:17.694 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":115,"completed":18,"skipped":870,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PV Protection Verify "immediate" deletion of a PV that is not bound to a PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:99 [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:36:05.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:51 Mar 25 17:36:07.340: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PV STEP: Waiting for PV to enter phase Available Mar 25 17:36:07.385: INFO: Waiting up to 30s for PersistentVolume hostpath-whgvj to have phase Available Mar 25 17:36:07.424: INFO: PersistentVolume hostpath-whgvj found and phase=Available (38.143846ms) STEP: Checking that PV Protection finalizer is set [It] Verify "immediate" deletion of a PV that is not bound to a PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:99 STEP: Deleting the PV Mar 25 17:36:07.430: INFO: Waiting up to 3m0s for PersistentVolume hostpath-whgvj to get deleted Mar 25 17:36:07.587: INFO: PersistentVolume hostpath-whgvj found and phase=Available (157.222194ms) Mar 25 17:36:09.591: INFO: PersistentVolume hostpath-whgvj was removed [AfterEach] [sig-storage] PV Protection 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:36:09.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-protection-7940" for this suite. [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:92 Mar 25 17:36:09.599: INFO: AfterEach: Cleaning up test resources. Mar 25 17:36:09.599: INFO: pvc is nil Mar 25 17:36:09.599: INFO: Deleting PersistentVolume "hostpath-whgvj" •{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":115,"completed":19,"skipped":891,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:36:09.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-02fd9625-cfcf-42a7-b3b9-b240df1e41ad" Mar 25 17:36:15.317: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-02fd9625-cfcf-42a7-b3b9-b240df1e41ad" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-02fd9625-cfcf-42a7-b3b9-b240df1e41ad" "/tmp/local-volume-test-02fd9625-cfcf-42a7-b3b9-b240df1e41ad"] Namespace:persistent-local-volumes-test-9549 PodName:hostexec-latest-worker-dgbk7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:36:15.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 17:36:15.439: INFO: Creating a PV followed by a PVC Mar 25 17:36:15.491: INFO: Waiting for PV local-pvkxv9v to bind to PVC pvc-8lbxq Mar 25 17:36:15.491: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-8lbxq] to have phase Bound Mar 25 17:36:15.514: INFO: PersistentVolumeClaim pvc-8lbxq found but phase is Pending instead of Bound. 
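The tmpfs-backed local volume above is prepared and removed on the node with ordinary mount commands, driven through the hostexec pod; a condensed host-side sketch (the directory name is illustrative):

  DIR=/tmp/local-volume-test-<id>
  mkdir -p "$DIR" && mount -t tmpfs -o size=10m tmpfs "$DIR"   # back the PV path with a 10MB tmpfs

  # ... PV/PVC are created against $DIR and the pod writes/reads through it ...

  umount "$DIR" && rm -r "$DIR"                                # teardown after the PV is deleted
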
Mar 25 17:36:17.519: INFO: PersistentVolumeClaim pvc-8lbxq found and phase=Bound (2.027199309s) Mar 25 17:36:17.519: INFO: Waiting up to 3m0s for PersistentVolume local-pvkxv9v to have phase Bound Mar 25 17:36:17.521: INFO: PersistentVolume local-pvkxv9v found and phase=Bound (2.438188ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 17:36:24.551: INFO: pod "pod-cd359581-1398-4df1-86b5-2e5842bb5e93" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 17:36:24.551: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9549 PodName:pod-cd359581-1398-4df1-86b5-2e5842bb5e93 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:36:24.551: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:36:24.682: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Mar 25 17:36:24.682: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9549 PodName:pod-cd359581-1398-4df1-86b5-2e5842bb5e93 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:36:24.682: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:36:24.904: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-cd359581-1398-4df1-86b5-2e5842bb5e93 in namespace persistent-local-volumes-test-9549 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:36:24.908: INFO: Deleting PersistentVolumeClaim "pvc-8lbxq" Mar 25 17:36:24.924: INFO: Deleting PersistentVolume "local-pvkxv9v" STEP: Unmount tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-02fd9625-cfcf-42a7-b3b9-b240df1e41ad" Mar 25 17:36:24.958: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-02fd9625-cfcf-42a7-b3b9-b240df1e41ad"] Namespace:persistent-local-volumes-test-9549 PodName:hostexec-latest-worker-dgbk7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:36:24.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Mar 25 17:36:25.115: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-02fd9625-cfcf-42a7-b3b9-b240df1e41ad] Namespace:persistent-local-volumes-test-9549 PodName:hostexec-latest-worker-dgbk7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:36:25.115: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:36:25.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9549" for this suite. • [SLOW TEST:15.891 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":115,"completed":20,"skipped":953,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:36:25.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] new files should be created with FSGroup ownership when container is non-root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 25 17:36:26.673: INFO: Waiting up to 5m0s for pod "pod-3ab27643-8461-46c8-93f0-0c8817288b31" in namespace "emptydir-8952" to be "Succeeded or Failed" Mar 25 17:36:26.791: INFO: Pod "pod-3ab27643-8461-46c8-93f0-0c8817288b31": Phase="Pending", Reason="", readiness=false. Elapsed: 117.678568ms Mar 25 17:36:28.826: INFO: Pod "pod-3ab27643-8461-46c8-93f0-0c8817288b31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153042744s Mar 25 17:36:30.933: INFO: Pod "pod-3ab27643-8461-46c8-93f0-0c8817288b31": Phase="Pending", Reason="", readiness=false. Elapsed: 4.259286256s Mar 25 17:36:33.097: INFO: Pod "pod-3ab27643-8461-46c8-93f0-0c8817288b31": Phase="Pending", Reason="", readiness=false. Elapsed: 6.423860929s Mar 25 17:36:35.103: INFO: Pod "pod-3ab27643-8461-46c8-93f0-0c8817288b31": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.429896157s STEP: Saw pod success Mar 25 17:36:35.103: INFO: Pod "pod-3ab27643-8461-46c8-93f0-0c8817288b31" satisfied condition "Succeeded or Failed" Mar 25 17:36:35.106: INFO: Trying to get logs from node latest-worker2 pod pod-3ab27643-8461-46c8-93f0-0c8817288b31 container test-container: STEP: delete the pod Mar 25 17:36:35.199: INFO: Waiting for pod pod-3ab27643-8461-46c8-93f0-0c8817288b31 to disappear Mar 25 17:36:35.212: INFO: Pod pod-3ab27643-8461-46c8-93f0-0c8817288b31 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:36:35.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8952" for this suite. • [SLOW TEST:9.702 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48 new files should be created with FSGroup ownership when container is non-root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":115,"completed":21,"skipped":1000,"failed":0} SSSSS ------------------------------ [sig-storage] Pod Disks should be able to delete a non-existent PD without error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449 [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:36:35.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 [It] should be able to delete a non-existent PD without error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449 Mar 25 17:36:35.294: INFO: Only supported for providers [gce] (not local) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:36:35.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-1071" for this suite. 
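Looking back at the EmptyDir FSGroup case a few entries above: the pod writes a new file into a tmpfs-backed emptyDir as a non-root user, and the assertion is that the file is created with the pod-level fsGroup as its group owner rather than the writing user's primary group. An illustrative in-pod check (the mount path is an assumption, not taken from the log):

  echo content > /mnt/volume1/new-file
  ls -ln /mnt/volume1/new-file    # group id should equal the fsGroup declared in the pod securityContext
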
S [SKIPPING] [0.110 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should be able to delete a non-existent PD without error [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449 Only supported for providers [gce] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:450 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:36:35.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 17:36:39.587: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-3c238601-deb6-4bd8-aeac-dd24b825effe-backend && mount --bind /tmp/local-volume-test-3c238601-deb6-4bd8-aeac-dd24b825effe-backend /tmp/local-volume-test-3c238601-deb6-4bd8-aeac-dd24b825effe-backend && ln -s /tmp/local-volume-test-3c238601-deb6-4bd8-aeac-dd24b825effe-backend /tmp/local-volume-test-3c238601-deb6-4bd8-aeac-dd24b825effe] Namespace:persistent-local-volumes-test-5462 PodName:hostexec-latest-worker2-8mcnd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:36:39.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 17:36:39.717: INFO: Creating a PV followed by a PVC Mar 25 17:36:39.744: INFO: Waiting for PV local-pvsqrhp to bind to PVC pvc-mnqvd Mar 25 17:36:39.745: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-mnqvd] to have phase Bound Mar 25 17:36:39.772: INFO: PersistentVolumeClaim pvc-mnqvd found but phase is Pending instead of Bound. 
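The dir-link-bindmounted flavour being set up here builds its backing path out of a self bind mount plus a symlink, exactly as in the hostexec command above; a host-side sketch with an illustrative path:

  BACKEND=/tmp/local-volume-test-<id>-backend
  mkdir "$BACKEND"
  mount --bind "$BACKEND" "$BACKEND"        # self bind mount so the path is a real mount point
  ln -s "$BACKEND" "${BACKEND%-backend}"    # the local PV points at the symlink, not the backend

  # cleanup, mirroring the AfterEach further down
  rm "${BACKEND%-backend}" && umount "$BACKEND" && rm -r "$BACKEND"
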
Mar 25 17:36:41.777: INFO: PersistentVolumeClaim pvc-mnqvd found and phase=Bound (2.032625222s) Mar 25 17:36:41.777: INFO: Waiting up to 3m0s for PersistentVolume local-pvsqrhp to have phase Bound Mar 25 17:36:41.781: INFO: PersistentVolume local-pvsqrhp found and phase=Bound (3.522626ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Mar 25 17:36:45.810: INFO: pod "pod-d3933716-a0fd-4e83-9826-81c37145278e" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 17:36:45.810: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5462 PodName:pod-d3933716-a0fd-4e83-9826-81c37145278e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:36:45.810: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:36:45.926: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 17:36:45.926: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5462 PodName:pod-d3933716-a0fd-4e83-9826-81c37145278e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:36:45.926: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:36:46.014: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-d3933716-a0fd-4e83-9826-81c37145278e in namespace persistent-local-volumes-test-5462 STEP: Creating pod2 STEP: Creating a pod Mar 25 17:36:50.054: INFO: pod "pod-da8c6a76-4ab0-4da2-9b0f-e582df63e3c9" created on Node "latest-worker2" STEP: Reading in pod2 Mar 25 17:36:50.054: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5462 PodName:pod-da8c6a76-4ab0-4da2-9b0f-e582df63e3c9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:36:50.054: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:36:50.138: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-da8c6a76-4ab0-4da2-9b0f-e582df63e3c9 in namespace persistent-local-volumes-test-5462 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:36:50.144: INFO: Deleting PersistentVolumeClaim "pvc-mnqvd" Mar 25 17:36:50.223: INFO: Deleting PersistentVolume "local-pvsqrhp" STEP: Removing the test directory Mar 25 17:36:50.232: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-3c238601-deb6-4bd8-aeac-dd24b825effe && umount /tmp/local-volume-test-3c238601-deb6-4bd8-aeac-dd24b825effe-backend && rm -r /tmp/local-volume-test-3c238601-deb6-4bd8-aeac-dd24b825effe-backend] Namespace:persistent-local-volumes-test-5462 PodName:hostexec-latest-worker2-8mcnd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:36:50.233: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] 
PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:36:50.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5462" for this suite. • [SLOW TEST:15.080 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":115,"completed":22,"skipped":1046,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:36:50.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that container can restart successfully after configmaps modified /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131 STEP: Create configmap STEP: Creating pod pod-subpath-test-configmap-wlh6 STEP: Failing liveness probe Mar 25 17:36:58.523: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=subpath-4931 exec pod-subpath-test-configmap-wlh6 --container test-container-volume-configmap-wlh6 -- /bin/sh -c rm /probe-volume/probe-file' Mar 25 17:37:01.796: INFO: stderr: "" Mar 25 17:37:01.796: INFO: stdout: "" Mar 25 17:37:01.796: INFO: Pod exec output: STEP: Waiting for container to restart Mar 25 17:37:01.800: INFO: Container test-container-subpath-configmap-wlh6, restarts: 0 Mar 25 17:37:11.808: INFO: Container test-container-subpath-configmap-wlh6, restarts: 1 Mar 25 17:37:11.808: INFO: Container has restart count: 1 STEP: Fix liveness probe STEP: Waiting for container to stop restarting Mar 25 17:37:15.825: INFO: Container has restart count: 2 Mar 25 17:37:35.824: INFO: Container has restart count: 3 Mar 25 17:38:37.823: INFO: Container restart has stabilized Mar 25 17:38:37.823: INFO: Deleting pod "pod-subpath-test-configmap-wlh6" in namespace "subpath-4931" Mar 25 17:38:37.837: INFO: Wait up to 5m0s for pod "pod-subpath-test-configmap-wlh6" to be fully deleted [AfterEach] [sig-storage] Subpath 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:39:45.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4931" for this suite. • [SLOW TEST:175.544 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Container restart /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:130 should verify that container can restart successfully after configmaps modified /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","total":115,"completed":23,"skipped":1126,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:39:45.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 17:39:48.107: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-0f63743a-7f44-4840-98d0-b95b56dda665-backend && ln -s /tmp/local-volume-test-0f63743a-7f44-4840-98d0-b95b56dda665-backend /tmp/local-volume-test-0f63743a-7f44-4840-98d0-b95b56dda665] Namespace:persistent-local-volumes-test-4471 PodName:hostexec-latest-worker2-j9wxx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:39:48.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 17:39:48.222: INFO: Creating a PV followed by a PVC Mar 25 17:39:48.240: INFO: Waiting for PV local-pv7vwg6 to bind to PVC pvc-wwr5s Mar 25 17:39:48.240: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-wwr5s] to have phase Bound Mar 25 17:39:48.288: INFO: PersistentVolumeClaim pvc-wwr5s found but phase is Pending instead of Bound. Mar 25 17:39:50.292: INFO: PersistentVolumeClaim pvc-wwr5s found but phase is Pending instead of Bound. Mar 25 17:39:52.297: INFO: PersistentVolumeClaim pvc-wwr5s found but phase is Pending instead of Bound. Mar 25 17:39:54.302: INFO: PersistentVolumeClaim pvc-wwr5s found but phase is Pending instead of Bound. Mar 25 17:39:56.307: INFO: PersistentVolumeClaim pvc-wwr5s found but phase is Pending instead of Bound. 
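
The binding loop above (and the identical ones elsewhere in this run) comes from the framework's PVC-phase wait helper polling through the Go client. For readers reproducing the check by hand, a rough kubectl equivalent is sketched below; the namespace and claim name are copied from this spec's log, while the loop itself is only an illustrative approximation of the 3m0s timeout / ~2s cadence seen above.

    # Poll a PVC until it reports phase Bound (illustrative stand-in for the framework helper).
    ns=persistent-local-volumes-test-4471   # namespace from the log above
    pvc=pvc-wwr5s                           # claim name from the log above
    for i in $(seq 1 90); do                # ~3 minutes at a 2-second interval
      phase=$(kubectl -n "$ns" get pvc "$pvc" -o jsonpath='{.status.phase}')
      if [ "$phase" = "Bound" ]; then echo "PVC $pvc is Bound"; break; fi
      echo "PVC $pvc phase is ${phase:-Pending} instead of Bound, retrying..."
      sleep 2
    done
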
Mar 25 17:39:58.312: INFO: PersistentVolumeClaim pvc-wwr5s found but phase is Pending instead of Bound. Mar 25 17:40:00.316: INFO: PersistentVolumeClaim pvc-wwr5s found but phase is Pending instead of Bound. Mar 25 17:40:02.321: INFO: PersistentVolumeClaim pvc-wwr5s found but phase is Pending instead of Bound. Mar 25 17:40:04.326: INFO: PersistentVolumeClaim pvc-wwr5s found and phase=Bound (16.085834277s) Mar 25 17:40:04.326: INFO: Waiting up to 3m0s for PersistentVolume local-pv7vwg6 to have phase Bound Mar 25 17:40:04.329: INFO: PersistentVolume local-pv7vwg6 found and phase=Bound (3.074191ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Mar 25 17:40:08.361: INFO: pod "pod-af41e7e2-2687-467b-84a1-e152f8b99dba" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 17:40:08.361: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4471 PodName:pod-af41e7e2-2687-467b-84a1-e152f8b99dba ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:40:08.361: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:40:08.469: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 17:40:08.469: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4471 PodName:pod-af41e7e2-2687-467b-84a1-e152f8b99dba ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:40:08.469: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:40:08.554: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-af41e7e2-2687-467b-84a1-e152f8b99dba in namespace persistent-local-volumes-test-4471 STEP: Creating pod2 STEP: Creating a pod Mar 25 17:40:12.608: INFO: pod "pod-19f4c263-7c02-4076-b540-2e317d6fe6dc" created on Node "latest-worker2" STEP: Reading in pod2 Mar 25 17:40:12.608: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4471 PodName:pod-19f4c263-7c02-4076-b540-2e317d6fe6dc ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:40:12.609: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:40:12.719: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-19f4c263-7c02-4076-b540-2e317d6fe6dc in namespace persistent-local-volumes-test-4471 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:40:12.724: INFO: Deleting PersistentVolumeClaim "pvc-wwr5s" Mar 25 17:40:12.733: INFO: Deleting PersistentVolume "local-pv7vwg6" STEP: Removing the test directory Mar 25 17:40:12.776: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0f63743a-7f44-4840-98d0-b95b56dda665 && rm -r /tmp/local-volume-test-0f63743a-7f44-4840-98d0-b95b56dda665-backend] Namespace:persistent-local-volumes-test-4471 
PodName:hostexec-latest-worker2-j9wxx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:40:12.776: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:40:12.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4471" for this suite. • [SLOW TEST:27.003 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":115,"completed":24,"skipped":1170,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:40:12.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] new files should be created with FSGroup ownership when container is root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 25 17:40:13.081: INFO: Waiting up to 5m0s for pod "pod-a546120c-d1b9-4dc3-beb3-927ec871d658" in namespace "emptydir-5059" to be "Succeeded or Failed" Mar 25 17:40:13.091: INFO: Pod "pod-a546120c-d1b9-4dc3-beb3-927ec871d658": Phase="Pending", Reason="", readiness=false. Elapsed: 9.2165ms Mar 25 17:40:15.181: INFO: Pod "pod-a546120c-d1b9-4dc3-beb3-927ec871d658": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099844939s Mar 25 17:40:17.186: INFO: Pod "pod-a546120c-d1b9-4dc3-beb3-927ec871d658": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.104343507s STEP: Saw pod success Mar 25 17:40:17.186: INFO: Pod "pod-a546120c-d1b9-4dc3-beb3-927ec871d658" satisfied condition "Succeeded or Failed" Mar 25 17:40:17.189: INFO: Trying to get logs from node latest-worker pod pod-a546120c-d1b9-4dc3-beb3-927ec871d658 container test-container: STEP: delete the pod Mar 25 17:40:17.237: INFO: Waiting for pod pod-a546120c-d1b9-4dc3-beb3-927ec871d658 to disappear Mar 25 17:40:17.253: INFO: Pod pod-a546120c-d1b9-4dc3-beb3-927ec871d658 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:40:17.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5059" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":115,"completed":25,"skipped":1195,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:794 [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:40:17.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should create and delete persistent volumes [fast] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:794 STEP: creating a Gluster DP server Pod STEP: locating the provisioner pod STEP: creating a StorageClass STEP: creating a claim object with a suffix for gluster dynamic provisioner Mar 25 17:40:23.461: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Creating a StorageClass volume-provisioning-5419-glusterdptest STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- volume-provisioning-5419 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {} 2Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*volume-provisioning-5419-glusterdptest,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} Mar 25 17:40:23.490: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-pq6dt] to have phase Bound Mar 25 17:40:23.501: INFO: PersistentVolumeClaim pvc-pq6dt found but phase is Pending instead of Bound. 
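
The claim dump above shows what the GlusterDynamicProvisioner spec submits: a 2Gi ReadWriteOnce claim pointing at the freshly created volume-provisioning-5419-glusterdptest StorageClass. A hand-written sketch of an equivalent StorageClass/claim pair follows; the provisioner is the in-tree GlusterFS provisioner this test family exercises, but the resturl value is a placeholder (the e2e test points it at the "Gluster DP server" pod it deploys itself) and the object names are illustrative.

    # Illustrative manifests in the spirit of the StorageClass and claim created above.
    kubectl apply -f - <<'EOF'
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: glusterdptest                  # the test generates volume-provisioning-5419-glusterdptest
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://10.0.0.1:8081"      # placeholder Heketi/Gluster REST endpoint
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-gluster-demo               # hypothetical name; the test generates "pvc-" + random suffix
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: glusterdptest
      resources:
        requests:
          storage: 2Gi                     # matches the 2Gi request in the claim spec above
    EOF
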
Mar 25 17:40:25.506: INFO: PersistentVolumeClaim pvc-pq6dt found and phase=Bound (2.015295921s) STEP: checking the claim STEP: checking the PV STEP: deleting claim "volume-provisioning-5419"/"pvc-pq6dt" STEP: deleting the claim's PV "pvc-e2430d41-2305-41d9-a913-6623b5e9a8b9" Mar 25 17:40:25.517: INFO: Waiting up to 20m0s for PersistentVolume pvc-e2430d41-2305-41d9-a913-6623b5e9a8b9 to get deleted Mar 25 17:40:25.545: INFO: PersistentVolume pvc-e2430d41-2305-41d9-a913-6623b5e9a8b9 found and phase=Bound (27.690259ms) Mar 25 17:40:30.548: INFO: PersistentVolume pvc-e2430d41-2305-41d9-a913-6623b5e9a8b9 was removed Mar 25 17:40:30.548: INFO: deleting claim "volume-provisioning-5419"/"pvc-pq6dt" Mar 25 17:40:30.551: INFO: deleting storage class volume-provisioning-5419-glusterdptest [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:40:30.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-5419" for this suite. • [SLOW TEST:13.325 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 GlusterDynamicProvisioner /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:793 should create and delete persistent volumes [fast] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:794 ------------------------------ {"msg":"PASSED [sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]","total":115,"completed":26,"skipped":1331,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:40:30.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 17:40:34.767: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-b2d4feef-58c7-4893-bfdc-abae33720094 && mount --bind /tmp/local-volume-test-b2d4feef-58c7-4893-bfdc-abae33720094 /tmp/local-volume-test-b2d4feef-58c7-4893-bfdc-abae33720094] Namespace:persistent-local-volumes-test-2484 
PodName:hostexec-latest-worker-22g7m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:40:34.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 17:40:34.897: INFO: Creating a PV followed by a PVC Mar 25 17:40:34.906: INFO: Waiting for PV local-pvlj47d to bind to PVC pvc-5qdkb Mar 25 17:40:34.906: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-5qdkb] to have phase Bound Mar 25 17:40:34.911: INFO: PersistentVolumeClaim pvc-5qdkb found but phase is Pending instead of Bound. Mar 25 17:40:36.915: INFO: PersistentVolumeClaim pvc-5qdkb found but phase is Pending instead of Bound. Mar 25 17:40:38.919: INFO: PersistentVolumeClaim pvc-5qdkb found but phase is Pending instead of Bound. Mar 25 17:40:40.922: INFO: PersistentVolumeClaim pvc-5qdkb found but phase is Pending instead of Bound. Mar 25 17:40:42.927: INFO: PersistentVolumeClaim pvc-5qdkb found but phase is Pending instead of Bound. Mar 25 17:40:44.931: INFO: PersistentVolumeClaim pvc-5qdkb found but phase is Pending instead of Bound. Mar 25 17:40:46.936: INFO: PersistentVolumeClaim pvc-5qdkb found but phase is Pending instead of Bound. Mar 25 17:40:48.941: INFO: PersistentVolumeClaim pvc-5qdkb found and phase=Bound (14.035887042s) Mar 25 17:40:48.941: INFO: Waiting up to 3m0s for PersistentVolume local-pvlj47d to have phase Bound Mar 25 17:40:48.945: INFO: PersistentVolume local-pvlj47d found and phase=Bound (3.206084ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Mar 25 17:40:52.982: INFO: pod "pod-dac46777-800f-46c9-a2b7-286eea0b2707" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 17:40:52.982: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2484 PodName:pod-dac46777-800f-46c9-a2b7-286eea0b2707 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:40:52.982: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:40:53.081: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 17:40:53.081: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2484 PodName:pod-dac46777-800f-46c9-a2b7-286eea0b2707 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:40:53.081: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:40:53.182: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-dac46777-800f-46c9-a2b7-286eea0b2707 in namespace persistent-local-volumes-test-2484 STEP: Creating pod2 STEP: Creating a pod Mar 25 17:40:57.223: INFO: pod "pod-4ae95613-41f7-494d-8629-b54d778c3f86" created on Node "latest-worker" STEP: Reading in pod2 Mar 25 17:40:57.223: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2484 PodName:pod-4ae95613-41f7-494d-8629-b54d778c3f86 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:40:57.223: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:40:57.351: 
INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-4ae95613-41f7-494d-8629-b54d778c3f86 in namespace persistent-local-volumes-test-2484 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:40:57.357: INFO: Deleting PersistentVolumeClaim "pvc-5qdkb" Mar 25 17:40:57.366: INFO: Deleting PersistentVolume "local-pvlj47d" STEP: Removing the test directory Mar 25 17:40:57.383: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-b2d4feef-58c7-4893-bfdc-abae33720094 && rm -r /tmp/local-volume-test-b2d4feef-58c7-4893-bfdc-abae33720094] Namespace:persistent-local-volumes-test-2484 PodName:hostexec-latest-worker-22g7m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:40:57.383: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:40:57.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2484" for this suite. • [SLOW TEST:26.967 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":115,"completed":27,"skipped":1574,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:40:57.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106 STEP: Creating 
a pod to test downward API volume plugin Mar 25 17:40:57.702: INFO: Waiting up to 5m0s for pod "metadata-volume-a3c32ce3-e552-4d66-9678-09a93a6580dc" in namespace "downward-api-2460" to be "Succeeded or Failed" Mar 25 17:40:57.706: INFO: Pod "metadata-volume-a3c32ce3-e552-4d66-9678-09a93a6580dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.469633ms Mar 25 17:40:59.739: INFO: Pod "metadata-volume-a3c32ce3-e552-4d66-9678-09a93a6580dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037049656s Mar 25 17:41:01.744: INFO: Pod "metadata-volume-a3c32ce3-e552-4d66-9678-09a93a6580dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041893148s Mar 25 17:41:03.853: INFO: Pod "metadata-volume-a3c32ce3-e552-4d66-9678-09a93a6580dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.15137846s STEP: Saw pod success Mar 25 17:41:03.853: INFO: Pod "metadata-volume-a3c32ce3-e552-4d66-9678-09a93a6580dc" satisfied condition "Succeeded or Failed" Mar 25 17:41:03.867: INFO: Trying to get logs from node latest-worker2 pod metadata-volume-a3c32ce3-e552-4d66-9678-09a93a6580dc container client-container: STEP: delete the pod Mar 25 17:41:04.010: INFO: Waiting for pod metadata-volume-a3c32ce3-e552-4d66-9678-09a93a6580dc to disappear Mar 25 17:41:04.014: INFO: Pod metadata-volume-a3c32ce3-e552-4d66-9678-09a93a6580dc no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:41:04.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2460" for this suite. • [SLOW TEST:6.463 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":115,"completed":28,"skipped":1638,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:41:04.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91 STEP: Creating a pod to test downward API volume plugin Mar 25 17:41:04.130: INFO: Waiting up to 5m0s for pod "metadata-volume-d35111d1-c238-4333-8f64-a61f41742c12" in namespace 
"downward-api-7982" to be "Succeeded or Failed" Mar 25 17:41:04.146: INFO: Pod "metadata-volume-d35111d1-c238-4333-8f64-a61f41742c12": Phase="Pending", Reason="", readiness=false. Elapsed: 15.333909ms Mar 25 17:41:06.150: INFO: Pod "metadata-volume-d35111d1-c238-4333-8f64-a61f41742c12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020073082s Mar 25 17:41:08.170: INFO: Pod "metadata-volume-d35111d1-c238-4333-8f64-a61f41742c12": Phase="Running", Reason="", readiness=true. Elapsed: 4.039431445s Mar 25 17:41:10.173: INFO: Pod "metadata-volume-d35111d1-c238-4333-8f64-a61f41742c12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042668627s STEP: Saw pod success Mar 25 17:41:10.173: INFO: Pod "metadata-volume-d35111d1-c238-4333-8f64-a61f41742c12" satisfied condition "Succeeded or Failed" Mar 25 17:41:10.176: INFO: Trying to get logs from node latest-worker2 pod metadata-volume-d35111d1-c238-4333-8f64-a61f41742c12 container client-container: STEP: delete the pod Mar 25 17:41:10.204: INFO: Waiting for pod metadata-volume-d35111d1-c238-4333-8f64-a61f41742c12 to disappear Mar 25 17:41:10.230: INFO: Pod metadata-volume-d35111d1-c238-4333-8f64-a61f41742c12 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:41:10.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7982" for this suite. • [SLOW TEST:6.214 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":115,"completed":29,"skipped":1646,"failed":0} SSS ------------------------------ [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:41:10.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should modify fsGroup if fsGroupPolicy=File /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 STEP: Building a driver namespace object, basename csi-mock-volumes-6806 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 17:41:10.471: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6806-5137/csi-attacher Mar 25 17:41:10.487: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6806 Mar 25 17:41:10.487: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6806 Mar 25 17:41:10.502: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6806 Mar 25 17:41:10.515: INFO: creating *v1.Role: 
csi-mock-volumes-6806-5137/external-attacher-cfg-csi-mock-volumes-6806 Mar 25 17:41:10.528: INFO: creating *v1.RoleBinding: csi-mock-volumes-6806-5137/csi-attacher-role-cfg Mar 25 17:41:10.539: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6806-5137/csi-provisioner Mar 25 17:41:10.545: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6806 Mar 25 17:41:10.545: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6806 Mar 25 17:41:10.551: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6806 Mar 25 17:41:10.557: INFO: creating *v1.Role: csi-mock-volumes-6806-5137/external-provisioner-cfg-csi-mock-volumes-6806 Mar 25 17:41:10.643: INFO: creating *v1.RoleBinding: csi-mock-volumes-6806-5137/csi-provisioner-role-cfg Mar 25 17:41:10.647: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6806-5137/csi-resizer Mar 25 17:41:10.671: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6806 Mar 25 17:41:10.671: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6806 Mar 25 17:41:10.677: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6806 Mar 25 17:41:10.682: INFO: creating *v1.Role: csi-mock-volumes-6806-5137/external-resizer-cfg-csi-mock-volumes-6806 Mar 25 17:41:10.718: INFO: creating *v1.RoleBinding: csi-mock-volumes-6806-5137/csi-resizer-role-cfg Mar 25 17:41:10.793: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6806-5137/csi-snapshotter Mar 25 17:41:10.797: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6806 Mar 25 17:41:10.797: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6806 Mar 25 17:41:10.815: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6806 Mar 25 17:41:10.857: INFO: creating *v1.Role: csi-mock-volumes-6806-5137/external-snapshotter-leaderelection-csi-mock-volumes-6806 Mar 25 17:41:10.876: INFO: creating *v1.RoleBinding: csi-mock-volumes-6806-5137/external-snapshotter-leaderelection Mar 25 17:41:10.907: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6806-5137/csi-mock Mar 25 17:41:10.911: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6806 Mar 25 17:41:10.916: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6806 Mar 25 17:41:10.933: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6806 Mar 25 17:41:10.946: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6806 Mar 25 17:41:10.963: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6806 Mar 25 17:41:10.979: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6806 Mar 25 17:41:10.982: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6806 Mar 25 17:41:10.988: INFO: creating *v1.StatefulSet: csi-mock-volumes-6806-5137/csi-mockplugin Mar 25 17:41:10.994: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6806 Mar 25 17:41:11.041: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6806" Mar 25 17:41:11.079: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6806 to register on node latest-worker2 STEP: Creating pod with fsGroup Mar 25 17:41:25.674: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 17:41:25.685: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-wmstt] to have 
phase Bound Mar 25 17:41:25.689: INFO: PersistentVolumeClaim pvc-wmstt found but phase is Pending instead of Bound. Mar 25 17:41:27.694: INFO: PersistentVolumeClaim pvc-wmstt found and phase=Bound (2.00799974s) Mar 25 17:41:31.724: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-6806] Namespace:csi-mock-volumes-6806 PodName:pvc-volume-tester-pz845 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:41:31.725: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:41:31.837: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-6806/csi-mock-volumes-6806'; sync] Namespace:csi-mock-volumes-6806 PodName:pvc-volume-tester-pz845 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:41:31.837: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:42:10.949: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-6806/csi-mock-volumes-6806] Namespace:csi-mock-volumes-6806 PodName:pvc-volume-tester-pz845 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:42:10.949: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:42:11.282: INFO: pod csi-mock-volumes-6806/pvc-volume-tester-pz845 exec for cmd ls -l /mnt/test/csi-mock-volumes-6806/csi-mock-volumes-6806, stdout: -rw-r--r-- 1 root 11666 13 Mar 25 17:41 /mnt/test/csi-mock-volumes-6806/csi-mock-volumes-6806, stderr: Mar 25 17:42:11.282: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-6806] Namespace:csi-mock-volumes-6806 PodName:pvc-volume-tester-pz845 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:42:11.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-pz845 Mar 25 17:42:11.435: INFO: Deleting pod "pvc-volume-tester-pz845" in namespace "csi-mock-volumes-6806" Mar 25 17:42:11.494: INFO: Wait up to 5m0s for pod "pvc-volume-tester-pz845" to be fully deleted STEP: Deleting claim pvc-wmstt Mar 25 17:43:15.567: INFO: Waiting up to 2m0s for PersistentVolume pvc-10af93f2-caa5-4d80-9b86-2db3a10df0f7 to get deleted Mar 25 17:43:15.579: INFO: PersistentVolume pvc-10af93f2-caa5-4d80-9b86-2db3a10df0f7 found and phase=Bound (12.281469ms) Mar 25 17:43:17.585: INFO: PersistentVolume pvc-10af93f2-caa5-4d80-9b86-2db3a10df0f7 was removed STEP: Deleting storageclass csi-mock-volumes-6806-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6806 STEP: Waiting for namespaces [csi-mock-volumes-6806] to vanish STEP: uninstalling csi mock driver Mar 25 17:43:23.615: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6806-5137/csi-attacher Mar 25 17:43:23.621: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6806 Mar 25 17:43:23.645: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6806 Mar 25 17:43:23.672: INFO: deleting *v1.Role: csi-mock-volumes-6806-5137/external-attacher-cfg-csi-mock-volumes-6806 Mar 25 17:43:23.688: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6806-5137/csi-attacher-role-cfg Mar 25 17:43:23.695: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6806-5137/csi-provisioner Mar 25 17:43:23.701: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6806 Mar 25 17:43:23.707: INFO: deleting *v1.ClusterRoleBinding: 
csi-provisioner-role-csi-mock-volumes-6806 Mar 25 17:43:23.724: INFO: deleting *v1.Role: csi-mock-volumes-6806-5137/external-provisioner-cfg-csi-mock-volumes-6806 Mar 25 17:43:23.729: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6806-5137/csi-provisioner-role-cfg Mar 25 17:43:23.767: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6806-5137/csi-resizer Mar 25 17:43:23.774: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6806 Mar 25 17:43:23.779: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6806 Mar 25 17:43:23.789: INFO: deleting *v1.Role: csi-mock-volumes-6806-5137/external-resizer-cfg-csi-mock-volumes-6806 Mar 25 17:43:23.796: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6806-5137/csi-resizer-role-cfg Mar 25 17:43:23.802: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6806-5137/csi-snapshotter Mar 25 17:43:23.808: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6806 Mar 25 17:43:23.814: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6806 Mar 25 17:43:23.826: INFO: deleting *v1.Role: csi-mock-volumes-6806-5137/external-snapshotter-leaderelection-csi-mock-volumes-6806 Mar 25 17:43:23.833: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6806-5137/external-snapshotter-leaderelection Mar 25 17:43:23.850: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6806-5137/csi-mock Mar 25 17:43:23.863: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6806 Mar 25 17:43:23.880: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6806 Mar 25 17:43:23.891: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6806 Mar 25 17:43:23.898: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6806 Mar 25 17:43:23.905: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6806 Mar 25 17:43:23.910: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6806 Mar 25 17:43:23.916: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6806 Mar 25 17:43:23.923: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6806-5137/csi-mockplugin Mar 25 17:43:23.941: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6806 STEP: deleting the driver namespace: csi-mock-volumes-6806-5137 STEP: Waiting for namespaces [csi-mock-volumes-6806-5137] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:43:51.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:161.731 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1433 should modify fsGroup if fsGroupPolicy=File /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":115,"completed":30,"skipped":1649,"failed":0} 
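
The "Creating pod with fsGroup" step in the spec above is what produces the group-owned file seen in the ls -l output (group 11666). A minimal, stand-alone sketch of such a pod is given below, assuming an already-bound claim; the claim name, GID and /mnt/test mount path mirror the log, while the pod name, image and sleep command are illustrative. Whether the GID is actually applied to the volume contents depends on the CSI driver's declared fsGroupPolicy, which is File in this spec.

    # Sketch of a pod mounting a PVC with an fsGroup, in the spirit of the test step above.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pvc-volume-tester-demo         # hypothetical; the test generates pvc-volume-tester-*
      namespace: csi-mock-volumes-6806     # namespace from the log above
    spec:
      securityContext:
        fsGroup: 11666                     # group ownership observed in the "ls -l" output above
      containers:
      - name: volume-tester
        image: busybox:1.33                # illustrative image
        command: ["sleep", "3600"]
        volumeMounts:
        - name: data
          mountPath: /mnt/test             # mount path used by the exec commands above
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc-wmstt             # claim name from the log above
    EOF
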
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:43:51.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 17:43:56.184: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-51fe41e7-9fc1-4930-82ff-080a5ff7627c && mount --bind /tmp/local-volume-test-51fe41e7-9fc1-4930-82ff-080a5ff7627c /tmp/local-volume-test-51fe41e7-9fc1-4930-82ff-080a5ff7627c] Namespace:persistent-local-volumes-test-6121 PodName:hostexec-latest-worker2-tb4gd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:43:56.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 17:43:56.334: INFO: Creating a PV followed by a PVC Mar 25 17:43:56.349: INFO: Waiting for PV local-pv7h6gb to bind to PVC pvc-8hpr2 Mar 25 17:43:56.349: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-8hpr2] to have phase Bound Mar 25 17:43:56.358: INFO: PersistentVolumeClaim pvc-8hpr2 found but phase is Pending instead of Bound. Mar 25 17:43:58.364: INFO: PersistentVolumeClaim pvc-8hpr2 found but phase is Pending instead of Bound. Mar 25 17:44:00.369: INFO: PersistentVolumeClaim pvc-8hpr2 found but phase is Pending instead of Bound. Mar 25 17:44:02.374: INFO: PersistentVolumeClaim pvc-8hpr2 found but phase is Pending instead of Bound. 
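
For reference, the node-side preparation hidden inside the nsenter command above (the suite drives it through a hostexec pod in the node's mount namespace) amounts to the following when run directly on the worker. The directory path is copied from the log; the matching umount/rm cleanup appears later in this spec's AfterEach.

    dir=/tmp/local-volume-test-51fe41e7-9fc1-4930-82ff-080a5ff7627c   # path from the log above
    mkdir "$dir"
    mount --bind "$dir" "$dir"   # a directory bind-mounted onto itself: the "dir-bindmounted" volume type
    # Cleanup, as performed in AfterEach:
    # umount "$dir" && rm -r "$dir"
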
Mar 25 17:44:04.379: INFO: PersistentVolumeClaim pvc-8hpr2 found and phase=Bound (8.029996868s) Mar 25 17:44:04.379: INFO: Waiting up to 3m0s for PersistentVolume local-pv7h6gb to have phase Bound Mar 25 17:44:04.387: INFO: PersistentVolume local-pv7h6gb found and phase=Bound (7.781735ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Mar 25 17:44:08.453: INFO: pod "pod-98d7b6c6-b3f1-4606-93ec-bd460d8113be" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 17:44:08.454: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6121 PodName:pod-98d7b6c6-b3f1-4606-93ec-bd460d8113be ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:44:08.454: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:44:08.593: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 17:44:08.593: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6121 PodName:pod-98d7b6c6-b3f1-4606-93ec-bd460d8113be ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:44:08.594: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:44:08.681: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Mar 25 17:44:12.706: INFO: pod "pod-1c3a28c2-1fe9-4ff9-b97c-a06598c9275d" created on Node "latest-worker2" Mar 25 17:44:12.706: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6121 PodName:pod-1c3a28c2-1fe9-4ff9-b97c-a06598c9275d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:44:12.706: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:44:12.836: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Mar 25 17:44:12.836: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-51fe41e7-9fc1-4930-82ff-080a5ff7627c > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6121 PodName:pod-1c3a28c2-1fe9-4ff9-b97c-a06598c9275d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:44:12.836: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:44:12.953: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-51fe41e7-9fc1-4930-82ff-080a5ff7627c > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Mar 25 17:44:12.953: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6121 PodName:pod-98d7b6c6-b3f1-4606-93ec-bd460d8113be ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:44:12.954: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:44:13.056: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-51fe41e7-9fc1-4930-82ff-080a5ff7627c", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod 
pod-98d7b6c6-b3f1-4606-93ec-bd460d8113be in namespace persistent-local-volumes-test-6121 STEP: Deleting pod2 STEP: Deleting pod pod-1c3a28c2-1fe9-4ff9-b97c-a06598c9275d in namespace persistent-local-volumes-test-6121 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:44:13.104: INFO: Deleting PersistentVolumeClaim "pvc-8hpr2" Mar 25 17:44:13.157: INFO: Deleting PersistentVolume "local-pv7h6gb" STEP: Removing the test directory Mar 25 17:44:13.198: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-51fe41e7-9fc1-4930-82ff-080a5ff7627c && rm -r /tmp/local-volume-test-51fe41e7-9fc1-4930-82ff-080a5ff7627c] Namespace:persistent-local-volumes-test-6121 PodName:hostexec-latest-worker2-tb4gd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:44:13.198: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:44:13.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6121" for this suite. • [SLOW TEST:21.573 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":115,"completed":31,"skipped":1894,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:44:13.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, have capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-4568 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 17:44:13.777: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4568-9003/csi-attacher Mar 25 17:44:13.781: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4568 
Mar 25 17:44:13.781: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4568 Mar 25 17:44:13.957: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4568 Mar 25 17:44:13.961: INFO: creating *v1.Role: csi-mock-volumes-4568-9003/external-attacher-cfg-csi-mock-volumes-4568 Mar 25 17:44:14.031: INFO: creating *v1.RoleBinding: csi-mock-volumes-4568-9003/csi-attacher-role-cfg Mar 25 17:44:14.054: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4568-9003/csi-provisioner Mar 25 17:44:14.094: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4568 Mar 25 17:44:14.094: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4568 Mar 25 17:44:14.102: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4568 Mar 25 17:44:14.107: INFO: creating *v1.Role: csi-mock-volumes-4568-9003/external-provisioner-cfg-csi-mock-volumes-4568 Mar 25 17:44:14.116: INFO: creating *v1.RoleBinding: csi-mock-volumes-4568-9003/csi-provisioner-role-cfg Mar 25 17:44:14.138: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4568-9003/csi-resizer Mar 25 17:44:14.143: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4568 Mar 25 17:44:14.143: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4568 Mar 25 17:44:14.163: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4568 Mar 25 17:44:14.192: INFO: creating *v1.Role: csi-mock-volumes-4568-9003/external-resizer-cfg-csi-mock-volumes-4568 Mar 25 17:44:14.232: INFO: creating *v1.RoleBinding: csi-mock-volumes-4568-9003/csi-resizer-role-cfg Mar 25 17:44:14.236: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4568-9003/csi-snapshotter Mar 25 17:44:14.239: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4568 Mar 25 17:44:14.239: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4568 Mar 25 17:44:14.245: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4568 Mar 25 17:44:14.251: INFO: creating *v1.Role: csi-mock-volumes-4568-9003/external-snapshotter-leaderelection-csi-mock-volumes-4568 Mar 25 17:44:14.272: INFO: creating *v1.RoleBinding: csi-mock-volumes-4568-9003/external-snapshotter-leaderelection Mar 25 17:44:14.282: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4568-9003/csi-mock Mar 25 17:44:14.294: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4568 Mar 25 17:44:14.311: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4568 Mar 25 17:44:14.317: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4568 Mar 25 17:44:14.323: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4568 Mar 25 17:44:14.330: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4568 Mar 25 17:44:14.380: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4568 Mar 25 17:44:14.397: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4568 Mar 25 17:44:14.426: INFO: creating *v1.StatefulSet: csi-mock-volumes-4568-9003/csi-mockplugin Mar 25 17:44:14.432: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4568 Mar 25 17:44:14.496: INFO: creating *v1.StatefulSet: csi-mock-volumes-4568-9003/csi-mockplugin-attacher Mar 25 17:44:14.518: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4568" Mar 25 
17:44:14.577: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4568 to register on node latest-worker2 Mar 25 17:44:24.329: FAIL: create CSIStorageCapacity {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name: GenerateName:fake-capacity- Namespace: SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} NodeTopology:&LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[]LabelSelectorRequirement{},} StorageClassName:mock-csi-storage-capacity-csi-mock-volumes-4568 Capacity:100Gi MaximumVolumeSize:} Unexpected error: <*errors.StatusError | 0xc000c245a0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func1.14.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1201 +0x47a k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00331cd80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00331cd80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc00331cd80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4568 STEP: Waiting for namespaces [csi-mock-volumes-4568] to vanish STEP: uninstalling csi mock driver Mar 25 17:44:30.342: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4568-9003/csi-attacher Mar 25 17:44:30.350: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4568 Mar 25 17:44:30.357: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4568 Mar 25 17:44:30.368: INFO: deleting *v1.Role: csi-mock-volumes-4568-9003/external-attacher-cfg-csi-mock-volumes-4568 Mar 25 17:44:30.375: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4568-9003/csi-attacher-role-cfg Mar 25 17:44:30.381: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4568-9003/csi-provisioner Mar 25 17:44:30.387: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4568 Mar 25 17:44:30.418: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4568 Mar 25 17:44:30.486: INFO: deleting *v1.Role: csi-mock-volumes-4568-9003/external-provisioner-cfg-csi-mock-volumes-4568 Mar 25 17:44:30.495: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4568-9003/csi-provisioner-role-cfg Mar 25 17:44:30.519: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4568-9003/csi-resizer Mar 25 17:44:30.537: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4568 Mar 25 17:44:30.603: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4568 Mar 25 17:44:30.607: INFO: deleting *v1.Role: csi-mock-volumes-4568-9003/external-resizer-cfg-csi-mock-volumes-4568 Mar 25 17:44:30.626: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4568-9003/csi-resizer-role-cfg Mar 25 17:44:30.645: INFO: deleting *v1.ServiceAccount: 
csi-mock-volumes-4568-9003/csi-snapshotter Mar 25 17:44:30.651: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4568 Mar 25 17:44:30.729: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4568 Mar 25 17:44:30.802: INFO: deleting *v1.Role: csi-mock-volumes-4568-9003/external-snapshotter-leaderelection-csi-mock-volumes-4568 Mar 25 17:44:30.818: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4568-9003/external-snapshotter-leaderelection Mar 25 17:44:30.825: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4568-9003/csi-mock Mar 25 17:44:31.207: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4568 Mar 25 17:44:31.226: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4568 Mar 25 17:44:31.300: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4568 Mar 25 17:44:31.309: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4568 Mar 25 17:44:31.334: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4568 Mar 25 17:44:31.356: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4568 Mar 25 17:44:31.363: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4568 Mar 25 17:44:31.426: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4568-9003/csi-mockplugin Mar 25 17:44:31.486: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4568 Mar 25 17:44:31.492: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4568-9003/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-4568-9003 STEP: Waiting for namespaces [csi-mock-volumes-4568-9003] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:45:17.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • Failure [64.448 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity used, have capacity [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 Mar 25 17:44:24.329: create CSIStorageCapacity {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name: GenerateName:fake-capacity- Namespace: SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} NodeTopology:&LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[]LabelSelectorRequirement{},} StorageClassName:mock-csi-storage-capacity-csi-mock-volumes-4568 Capacity:100Gi MaximumVolumeSize:} Unexpected error: <*errors.StatusError | 0xc000c245a0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred 
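
A note on the failure above: the step is creating a CSIStorageCapacity object (the dump beginning with GenerateName:fake-capacity-) and the apiserver answers 404 "the server could not find the requested resource". On a create call that status usually means the storage.k8s.io API version the client is asking for is not served by this apiserver at all, rather than that some named object is missing. A minimal client-go sketch of the same kind of create, assuming the v1beta1 API, a kubeconfig at /root/.kube/config, and an illustrative namespace and storage class name (none of these are the generated names from this run):

package main

import (
    "context"
    "fmt"

    storagev1beta1 "k8s.io/api/storage/v1beta1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Kubeconfig path is an assumption for this sketch.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    capacity := resource.MustParse("100Gi")
    csc := &storagev1beta1.CSIStorageCapacity{
        ObjectMeta:       metav1.ObjectMeta{GenerateName: "fake-capacity-"},
        NodeTopology:     &metav1.LabelSelector{},  // empty selector: matches every node, as in the dump above
        StorageClassName: "example-storage-class",  // illustrative name
        Capacity:         &capacity,
    }

    // If storage.k8s.io/v1beta1 CSIStorageCapacity is not served by the cluster,
    // this create fails with the same 404 "the server could not find the requested resource".
    created, err := cs.StorageV1beta1().CSIStorageCapacities("default").Create(context.TODO(), csc, metav1.CreateOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("created CSIStorageCapacity", created.Name)
}

When the API version is simply absent, the 404 is returned regardless of what the object contains, which matches the empty Details block in the status above.
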
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1201 ------------------------------ {"msg":"FAILED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","total":115,"completed":31,"skipped":1904,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:45:17.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-c9cca916-d16e-4186-8b8e-872f9da48d4c" Mar 25 17:45:22.539: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c9cca916-d16e-4186-8b8e-872f9da48d4c && dd if=/dev/zero of=/tmp/local-volume-test-c9cca916-d16e-4186-8b8e-872f9da48d4c/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-c9cca916-d16e-4186-8b8e-872f9da48d4c/file] Namespace:persistent-local-volumes-test-807 PodName:hostexec-latest-worker2-m5x7p ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:45:22.539: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:45:22.728: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-c9cca916-d16e-4186-8b8e-872f9da48d4c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-807 PodName:hostexec-latest-worker2-m5x7p ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:45:22.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 17:45:22.975: INFO: Creating a PV followed by a PVC Mar 25 17:45:23.323: INFO: Waiting for PV local-pvm2pxv to bind to PVC pvc-jcfqp Mar 25 17:45:23.323: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-jcfqp] to have phase Bound Mar 25 17:45:23.329: INFO: PersistentVolumeClaim pvc-jcfqp found but phase is Pending instead of Bound. Mar 25 17:45:25.334: INFO: PersistentVolumeClaim pvc-jcfqp found but phase is Pending instead of Bound. Mar 25 17:45:27.338: INFO: PersistentVolumeClaim pvc-jcfqp found but phase is Pending instead of Bound. 
Mar 25 17:45:29.342: INFO: PersistentVolumeClaim pvc-jcfqp found but phase is Pending instead of Bound. Mar 25 17:45:31.346: INFO: PersistentVolumeClaim pvc-jcfqp found but phase is Pending instead of Bound. Mar 25 17:45:33.350: INFO: PersistentVolumeClaim pvc-jcfqp found and phase=Bound (10.027054705s) Mar 25 17:45:33.350: INFO: Waiting up to 3m0s for PersistentVolume local-pvm2pxv to have phase Bound Mar 25 17:45:33.353: INFO: PersistentVolume local-pvm2pxv found and phase=Bound (2.418594ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 17:45:37.394: INFO: pod "pod-d54bd892-1853-499a-9335-dbef36780d88" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 17:45:37.394: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-807 PodName:pod-d54bd892-1853-499a-9335-dbef36780d88 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:45:37.394: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:45:37.522: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Mar 25 17:45:37.522: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-807 PodName:pod-d54bd892-1853-499a-9335-dbef36780d88 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:45:37.522: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:45:37.635: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Mar 25 17:45:37.635: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-807 PodName:pod-d54bd892-1853-499a-9335-dbef36780d88 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:45:37.635: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:45:37.746: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-d54bd892-1853-499a-9335-dbef36780d88 in namespace persistent-local-volumes-test-807 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:45:37.752: INFO: Deleting PersistentVolumeClaim "pvc-jcfqp" Mar 25 17:45:37.771: INFO: Deleting PersistentVolume "local-pvm2pxv" Mar 25 17:45:37.795: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-c9cca916-d16e-4186-8b8e-872f9da48d4c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-807 
PodName:hostexec-latest-worker2-m5x7p ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:45:37.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-c9cca916-d16e-4186-8b8e-872f9da48d4c/file Mar 25 17:45:37.939: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-807 PodName:hostexec-latest-worker2-m5x7p ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:45:37.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-c9cca916-d16e-4186-8b8e-872f9da48d4c Mar 25 17:45:38.049: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c9cca916-d16e-4186-8b8e-872f9da48d4c] Namespace:persistent-local-volumes-test-807 PodName:hostexec-latest-worker2-m5x7p ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:45:38.049: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:45:38.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-807" for this suite. • [SLOW TEST:20.159 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":115,"completed":32,"skipped":2041,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:45:38.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-0ed85a41-f32e-4178-b2a1-0766984a0bb3" Mar 25 17:45:40.311: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-0ed85a41-f32e-4178-b2a1-0766984a0bb3 && dd if=/dev/zero of=/tmp/local-volume-test-0ed85a41-f32e-4178-b2a1-0766984a0bb3/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-0ed85a41-f32e-4178-b2a1-0766984a0bb3/file] Namespace:persistent-local-volumes-test-6428 PodName:hostexec-latest-worker2-j9swz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:45:40.311: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:45:40.456: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-0ed85a41-f32e-4178-b2a1-0766984a0bb3/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6428 PodName:hostexec-latest-worker2-j9swz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:45:40.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 17:45:40.550: INFO: Creating a PV followed by a PVC Mar 25 17:45:40.562: INFO: Waiting for PV local-pvjz2mx to bind to PVC pvc-tcrxk Mar 25 17:45:40.562: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-tcrxk] to have phase Bound Mar 25 17:45:40.742: INFO: PersistentVolumeClaim pvc-tcrxk found but phase is Pending instead of Bound. Mar 25 17:45:42.747: INFO: PersistentVolumeClaim pvc-tcrxk found but phase is Pending instead of Bound. Mar 25 17:45:44.751: INFO: PersistentVolumeClaim pvc-tcrxk found but phase is Pending instead of Bound. Mar 25 17:45:46.755: INFO: PersistentVolumeClaim pvc-tcrxk found but phase is Pending instead of Bound. 
Mar 25 17:45:48.758: INFO: PersistentVolumeClaim pvc-tcrxk found and phase=Bound (8.196580628s) Mar 25 17:45:48.758: INFO: Waiting up to 3m0s for PersistentVolume local-pvjz2mx to have phase Bound Mar 25 17:45:48.761: INFO: PersistentVolume local-pvjz2mx found and phase=Bound (2.109654ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 17:45:52.785: INFO: pod "pod-658c9b91-f69a-4c89-9dca-a1487d8893fa" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 17:45:52.785: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6428 PodName:pod-658c9b91-f69a-4c89-9dca-a1487d8893fa ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:45:52.785: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:45:52.902: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Mar 25 17:45:52.902: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6428 PodName:pod-658c9b91-f69a-4c89-9dca-a1487d8893fa ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:45:52.902: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:45:52.987: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-658c9b91-f69a-4c89-9dca-a1487d8893fa in namespace persistent-local-volumes-test-6428 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:45:52.992: INFO: Deleting PersistentVolumeClaim "pvc-tcrxk" Mar 25 17:45:53.020: INFO: Deleting PersistentVolume "local-pvjz2mx" Mar 25 17:45:53.047: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-0ed85a41-f32e-4178-b2a1-0766984a0bb3/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6428 PodName:hostexec-latest-worker2-j9swz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:45:53.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop1" on node "latest-worker2" at path /tmp/local-volume-test-0ed85a41-f32e-4178-b2a1-0766984a0bb3/file Mar 25 17:45:53.145: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop1] Namespace:persistent-local-volumes-test-6428 PodName:hostexec-latest-worker2-j9swz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:45:53.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing 
the test directory /tmp/local-volume-test-0ed85a41-f32e-4178-b2a1-0766984a0bb3 Mar 25 17:45:53.247: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0ed85a41-f32e-4178-b2a1-0766984a0bb3] Namespace:persistent-local-volumes-test-6428 PodName:hostexec-latest-worker2-j9swz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:45:53.247: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:45:53.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6428" for this suite. • [SLOW TEST:15.244 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":115,"completed":33,"skipped":2070,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:45:53.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 17:45:57.564: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-94f4136f-a939-4dee-a584-2874457bc357] Namespace:persistent-local-volumes-test-3271 PodName:hostexec-latest-worker-wdxtl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:45:57.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 17:45:57.712: INFO: Creating a PV followed by a PVC Mar 25 17:45:57.724: INFO: Waiting for PV local-pv7q7mk 
to bind to PVC pvc-9s7mm Mar 25 17:45:57.724: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-9s7mm] to have phase Bound Mar 25 17:45:57.766: INFO: PersistentVolumeClaim pvc-9s7mm found but phase is Pending instead of Bound. Mar 25 17:45:59.771: INFO: PersistentVolumeClaim pvc-9s7mm found but phase is Pending instead of Bound. Mar 25 17:46:01.776: INFO: PersistentVolumeClaim pvc-9s7mm found but phase is Pending instead of Bound. Mar 25 17:46:03.781: INFO: PersistentVolumeClaim pvc-9s7mm found and phase=Bound (6.057128393s) Mar 25 17:46:03.781: INFO: Waiting up to 3m0s for PersistentVolume local-pv7q7mk to have phase Bound Mar 25 17:46:03.784: INFO: PersistentVolume local-pv7q7mk found and phase=Bound (2.887568ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Mar 25 17:46:07.815: INFO: pod "pod-31397f34-da6d-4ddc-841e-49b92393c6e1" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 17:46:07.815: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3271 PodName:pod-31397f34-da6d-4ddc-841e-49b92393c6e1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:46:07.815: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:46:07.935: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 17:46:07.935: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3271 PodName:pod-31397f34-da6d-4ddc-841e-49b92393c6e1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:46:07.935: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:46:08.051: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Mar 25 17:46:12.113: INFO: pod "pod-3bad1f6e-4dff-47c4-a3af-0f47b10af8d0" created on Node "latest-worker" Mar 25 17:46:12.113: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3271 PodName:pod-3bad1f6e-4dff-47c4-a3af-0f47b10af8d0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:46:12.113: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:46:12.243: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Mar 25 17:46:12.243: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-94f4136f-a939-4dee-a584-2874457bc357 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3271 PodName:pod-3bad1f6e-4dff-47c4-a3af-0f47b10af8d0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:46:12.243: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:46:12.346: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-94f4136f-a939-4dee-a584-2874457bc357 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Mar 25 17:46:12.346: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] 
Namespace:persistent-local-volumes-test-3271 PodName:pod-31397f34-da6d-4ddc-841e-49b92393c6e1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:46:12.346: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:46:12.455: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-94f4136f-a939-4dee-a584-2874457bc357", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-31397f34-da6d-4ddc-841e-49b92393c6e1 in namespace persistent-local-volumes-test-3271 STEP: Deleting pod2 STEP: Deleting pod pod-3bad1f6e-4dff-47c4-a3af-0f47b10af8d0 in namespace persistent-local-volumes-test-3271 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:46:12.497: INFO: Deleting PersistentVolumeClaim "pvc-9s7mm" Mar 25 17:46:12.517: INFO: Deleting PersistentVolume "local-pv7q7mk" STEP: Removing the test directory Mar 25 17:46:12.533: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-94f4136f-a939-4dee-a584-2874457bc357] Namespace:persistent-local-volumes-test-3271 PodName:hostexec-latest-worker-wdxtl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:46:12.533: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:46:12.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3271" for this suite. 
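
For reference, the two-pod spec above runs against a pre-created "local" PersistentVolume: a plain directory on the node latest-worker, pinned to that node through volume node affinity, with a PVC bound to it before the pods start. A rough sketch of such a PV built from client-go types (the object name, path, capacity and storage class are placeholders rather than the generated values in this run; the hostname value mirrors the node used above):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    fsMode := corev1.PersistentVolumeFilesystem

    pv := corev1.PersistentVolume{
        ObjectMeta: metav1.ObjectMeta{Name: "example-local-pv"}, // placeholder name
        Spec: corev1.PersistentVolumeSpec{
            Capacity:         corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("2Gi")},
            AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
            VolumeMode:       &fsMode,
            StorageClassName: "local-storage", // placeholder class
            PersistentVolumeSource: corev1.PersistentVolumeSource{
                // A local PV is just a path that already exists on one node.
                Local: &corev1.LocalVolumeSource{Path: "/tmp/example-local-volume"},
            },
            // Local PVs must be pinned to the node that owns the path;
            // the scheduler then places any pod using the bound claim onto that node.
            NodeAffinity: &corev1.VolumeNodeAffinity{
                Required: &corev1.NodeSelector{
                    NodeSelectorTerms: []corev1.NodeSelectorTerm{{
                        MatchExpressions: []corev1.NodeSelectorRequirement{{
                            Key:      "kubernetes.io/hostname",
                            Operator: corev1.NodeSelectorOpIn,
                            Values:   []string{"latest-worker"},
                        }},
                    }},
                },
            },
        },
    }

    fmt.Printf("%+v\n", pv.Spec)
}

The node affinity block is what keeps both pods scheduled onto the node that actually owns the directory, which is why pod1's write is visible to pod2 in the records above.
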
• [SLOW TEST:19.611 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":115,"completed":34,"skipped":2072,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Volumes ConfigMap should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48 [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:46:13.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:42 [It] should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48 STEP: starting configmap-client STEP: Checking that text file contents are perfect. 
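
The ConfigMap spec that just started mounts a single ConfigMap into a client pod and then compares the projected files (/opt/0/firstfile, /opt/1/secondfile) against the expected contents, as the kubectl exec output below shows. A minimal sketch of that volume wiring with client-go types, where the ConfigMap name, keys, image and command are illustrative stand-ins rather than the values generated by the suite:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-client"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "example-configmap"},
                        // Each key becomes a file under the container's mount path.
                        Items: []corev1.KeyToPath{
                            {Key: "first", Path: "0/firstfile"},
                            {Key: "second", Path: "1/secondfile"},
                        },
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "configmap-client",
                Image:   "busybox", // placeholder; the e2e suite uses its own test images
                Command: []string{"sh", "-c", "cat /opt/0/firstfile /opt/1/secondfile && sleep 3600"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "configmap-volume",
                    MountPath: "/opt",
                }},
            }},
        },
    }

    fmt.Printf("%+v\n", pod.Spec.Volumes[0])
}
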
Mar 25 17:46:17.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=volume-7848 exec configmap-client --namespace=volume-7848 -- cat /opt/0/firstfile' Mar 25 17:46:17.592: INFO: stderr: "" Mar 25 17:46:17.592: INFO: stdout: "this is the first file" Mar 25 17:46:17.592: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/0] Namespace:volume-7848 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:46:17.592: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:46:17.682: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/0] Namespace:volume-7848 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:46:17.682: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:46:17.785: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=volume-7848 exec configmap-client --namespace=volume-7848 -- cat /opt/1/secondfile' Mar 25 17:46:18.004: INFO: stderr: "" Mar 25 17:46:18.004: INFO: stdout: "this is the second file" Mar 25 17:46:18.004: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/1] Namespace:volume-7848 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:46:18.005: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:46:18.098: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/1] Namespace:volume-7848 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:46:18.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod configmap-client in namespace volume-7848 Mar 25 17:46:18.247: INFO: Waiting for pod configmap-client to disappear Mar 25 17:46:18.275: INFO: Pod configmap-client still exists Mar 25 17:46:20.276: INFO: Waiting for pod configmap-client to disappear Mar 25 17:46:20.281: INFO: Pod configmap-client still exists Mar 25 17:46:22.275: INFO: Waiting for pod configmap-client to disappear Mar 25 17:46:22.280: INFO: Pod configmap-client still exists Mar 25 17:46:24.275: INFO: Waiting for pod configmap-client to disappear Mar 25 17:46:24.280: INFO: Pod configmap-client still exists Mar 25 17:46:26.276: INFO: Waiting for pod configmap-client to disappear Mar 25 17:46:26.280: INFO: Pod configmap-client still exists Mar 25 17:46:28.276: INFO: Waiting for pod configmap-client to disappear Mar 25 17:46:28.279: INFO: Pod configmap-client still exists Mar 25 17:46:30.275: INFO: Waiting for pod configmap-client to disappear Mar 25 17:46:30.278: INFO: Pod configmap-client still exists Mar 25 17:46:32.275: INFO: Waiting for pod configmap-client to disappear Mar 25 17:46:32.280: INFO: Pod configmap-client still exists Mar 25 17:46:34.276: INFO: Waiting for pod configmap-client to disappear Mar 25 17:46:34.280: INFO: Pod configmap-client still exists Mar 25 17:46:36.276: INFO: Waiting for pod configmap-client to disappear Mar 25 17:46:36.280: INFO: Pod configmap-client still exists Mar 25 17:46:38.276: INFO: Waiting for pod configmap-client to disappear Mar 25 17:46:38.281: INFO: Pod configmap-client still exists Mar 25 17:46:40.276: INFO: Waiting for pod configmap-client to disappear Mar 25 17:46:40.280: INFO: Pod configmap-client still exists Mar 25 17:46:42.276: INFO: 
Waiting for pod configmap-client to disappear Mar 25 17:46:42.280: INFO: Pod configmap-client still exists Mar 25 17:46:44.275: INFO: Waiting for pod configmap-client to disappear Mar 25 17:46:44.280: INFO: Pod configmap-client still exists Mar 25 17:46:46.276: INFO: Waiting for pod configmap-client to disappear Mar 25 17:46:46.280: INFO: Pod configmap-client still exists Mar 25 17:46:48.276: INFO: Waiting for pod configmap-client to disappear Mar 25 17:46:48.280: INFO: Pod configmap-client still exists Mar 25 17:46:50.275: INFO: Waiting for pod configmap-client to disappear Mar 25 17:46:50.279: INFO: Pod configmap-client still exists Mar 25 17:46:52.275: INFO: Waiting for pod configmap-client to disappear Mar 25 17:46:52.278: INFO: Pod configmap-client still exists Mar 25 17:46:54.276: INFO: Waiting for pod configmap-client to disappear Mar 25 17:46:54.280: INFO: Pod configmap-client still exists Mar 25 17:46:56.275: INFO: Waiting for pod configmap-client to disappear Mar 25 17:46:56.289: INFO: Pod configmap-client still exists Mar 25 17:46:58.276: INFO: Waiting for pod configmap-client to disappear Mar 25 17:46:58.295: INFO: Pod configmap-client still exists Mar 25 17:47:00.275: INFO: Waiting for pod configmap-client to disappear Mar 25 17:47:00.402: INFO: Pod configmap-client still exists Mar 25 17:47:02.275: INFO: Waiting for pod configmap-client to disappear Mar 25 17:47:02.279: INFO: Pod configmap-client still exists Mar 25 17:47:04.276: INFO: Waiting for pod configmap-client to disappear Mar 25 17:47:04.281: INFO: Pod configmap-client still exists Mar 25 17:47:06.275: INFO: Waiting for pod configmap-client to disappear Mar 25 17:47:06.301: INFO: Pod configmap-client still exists Mar 25 17:47:08.275: INFO: Waiting for pod configmap-client to disappear Mar 25 17:47:08.279: INFO: Pod configmap-client still exists Mar 25 17:47:10.275: INFO: Waiting for pod configmap-client to disappear Mar 25 17:47:10.280: INFO: Pod configmap-client still exists Mar 25 17:47:12.276: INFO: Waiting for pod configmap-client to disappear Mar 25 17:47:12.280: INFO: Pod configmap-client still exists Mar 25 17:47:14.275: INFO: Waiting for pod configmap-client to disappear Mar 25 17:47:14.279: INFO: Pod configmap-client still exists Mar 25 17:47:16.275: INFO: Waiting for pod configmap-client to disappear Mar 25 17:47:16.280: INFO: Pod configmap-client no longer exists [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:47:16.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-7848" for this suite. 
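
The long run of "Waiting for pod configmap-client to disappear" records above is a simple deletion poll: the pod is re-read every two seconds until the API returns NotFound. A rough equivalent using client-go, assuming a kubeconfig at /root/.kube/config; the helper name waitForPodGone is made up for this sketch:

package main

import (
    "context"
    "fmt"
    "time"

    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// waitForPodGone polls until the named pod returns NotFound or the timeout expires.
func waitForPodGone(cs kubernetes.Interface, namespace, name string, timeout time.Duration) error {
    return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
        _, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return true, nil // pod no longer exists: done
        }
        if err != nil {
            return false, err // unexpected API error: stop polling
        }
        return false, nil // pod still exists: keep waiting
    })
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    if err := waitForPodGone(cs, "volume-7848", "configmap-client", 5*time.Minute); err != nil {
        panic(err)
    }
    fmt.Println("pod is gone")
}
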
• [SLOW TEST:63.288 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:47 should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48 ------------------------------ {"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":115,"completed":35,"skipped":2119,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:47:16.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume by restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-3764 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 17:47:16.554: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3764-9801/csi-attacher Mar 25 17:47:16.558: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3764 Mar 25 17:47:16.558: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3764 Mar 25 17:47:16.561: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3764 Mar 25 17:47:16.573: INFO: creating *v1.Role: csi-mock-volumes-3764-9801/external-attacher-cfg-csi-mock-volumes-3764 Mar 25 17:47:16.589: INFO: creating *v1.RoleBinding: csi-mock-volumes-3764-9801/csi-attacher-role-cfg Mar 25 17:47:16.604: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3764-9801/csi-provisioner Mar 25 17:47:16.635: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3764 Mar 25 17:47:16.635: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3764 Mar 25 17:47:16.638: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3764 Mar 25 17:47:16.648: INFO: creating *v1.Role: csi-mock-volumes-3764-9801/external-provisioner-cfg-csi-mock-volumes-3764 Mar 25 17:47:16.654: INFO: creating *v1.RoleBinding: csi-mock-volumes-3764-9801/csi-provisioner-role-cfg Mar 25 17:47:16.670: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3764-9801/csi-resizer Mar 25 17:47:16.685: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3764 Mar 25 17:47:16.685: INFO: Define cluster role 
external-resizer-runner-csi-mock-volumes-3764 Mar 25 17:47:16.700: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3764 Mar 25 17:47:16.714: INFO: creating *v1.Role: csi-mock-volumes-3764-9801/external-resizer-cfg-csi-mock-volumes-3764 Mar 25 17:47:16.785: INFO: creating *v1.RoleBinding: csi-mock-volumes-3764-9801/csi-resizer-role-cfg Mar 25 17:47:16.792: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3764-9801/csi-snapshotter Mar 25 17:47:16.804: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3764 Mar 25 17:47:16.804: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3764 Mar 25 17:47:16.822: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3764 Mar 25 17:47:16.856: INFO: creating *v1.Role: csi-mock-volumes-3764-9801/external-snapshotter-leaderelection-csi-mock-volumes-3764 Mar 25 17:47:16.864: INFO: creating *v1.RoleBinding: csi-mock-volumes-3764-9801/external-snapshotter-leaderelection Mar 25 17:47:16.904: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3764-9801/csi-mock Mar 25 17:47:16.911: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3764 Mar 25 17:47:16.917: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3764 Mar 25 17:47:16.923: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3764 Mar 25 17:47:16.929: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3764 Mar 25 17:47:16.946: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3764 Mar 25 17:47:16.958: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3764 Mar 25 17:47:16.971: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3764 Mar 25 17:47:16.977: INFO: creating *v1.StatefulSet: csi-mock-volumes-3764-9801/csi-mockplugin Mar 25 17:47:16.984: INFO: creating *v1.StatefulSet: csi-mock-volumes-3764-9801/csi-mockplugin-attacher Mar 25 17:47:17.049: INFO: creating *v1.StatefulSet: csi-mock-volumes-3764-9801/csi-mockplugin-resizer Mar 25 17:47:17.060: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3764 to register on node latest-worker STEP: Creating pod Mar 25 17:47:26.799: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 17:47:26.843: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-vm6ds] to have phase Bound Mar 25 17:47:26.935: INFO: PersistentVolumeClaim pvc-vm6ds found but phase is Pending instead of Bound. 
Mar 25 17:47:28.952: INFO: PersistentVolumeClaim pvc-vm6ds found and phase=Bound (2.109342582s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Checking for conditions on pvc STEP: Deleting the previously created pod Mar 25 17:47:49.034: INFO: Deleting pod "pvc-volume-tester-wxm2x" in namespace "csi-mock-volumes-3764" Mar 25 17:47:49.039: INFO: Wait up to 5m0s for pod "pvc-volume-tester-wxm2x" to be fully deleted STEP: Creating a new pod with same volume STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-wxm2x Mar 25 17:48:13.091: INFO: Deleting pod "pvc-volume-tester-wxm2x" in namespace "csi-mock-volumes-3764" STEP: Deleting pod pvc-volume-tester-d8zh7 Mar 25 17:48:13.095: INFO: Deleting pod "pvc-volume-tester-d8zh7" in namespace "csi-mock-volumes-3764" Mar 25 17:48:13.101: INFO: Wait up to 5m0s for pod "pvc-volume-tester-d8zh7" to be fully deleted STEP: Deleting claim pvc-vm6ds Mar 25 17:48:57.142: INFO: Waiting up to 2m0s for PersistentVolume pvc-432f6a68-04d5-4df1-a5a3-0afc244db160 to get deleted Mar 25 17:48:57.173: INFO: PersistentVolume pvc-432f6a68-04d5-4df1-a5a3-0afc244db160 found and phase=Bound (30.36205ms) Mar 25 17:48:59.177: INFO: PersistentVolume pvc-432f6a68-04d5-4df1-a5a3-0afc244db160 was removed STEP: Deleting storageclass csi-mock-volumes-3764-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3764 STEP: Waiting for namespaces [csi-mock-volumes-3764] to vanish STEP: uninstalling csi mock driver Mar 25 17:49:05.235: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3764-9801/csi-attacher Mar 25 17:49:05.241: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3764 Mar 25 17:49:05.258: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3764 Mar 25 17:49:05.270: INFO: deleting *v1.Role: csi-mock-volumes-3764-9801/external-attacher-cfg-csi-mock-volumes-3764 Mar 25 17:49:05.277: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3764-9801/csi-attacher-role-cfg Mar 25 17:49:05.300: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3764-9801/csi-provisioner Mar 25 17:49:05.306: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3764 Mar 25 17:49:05.328: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3764 Mar 25 17:49:05.357: INFO: deleting *v1.Role: csi-mock-volumes-3764-9801/external-provisioner-cfg-csi-mock-volumes-3764 Mar 25 17:49:05.367: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3764-9801/csi-provisioner-role-cfg Mar 25 17:49:05.372: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3764-9801/csi-resizer Mar 25 17:49:05.378: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3764 Mar 25 17:49:05.385: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3764 Mar 25 17:49:05.395: INFO: deleting *v1.Role: csi-mock-volumes-3764-9801/external-resizer-cfg-csi-mock-volumes-3764 Mar 25 17:49:05.402: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3764-9801/csi-resizer-role-cfg Mar 25 17:49:05.408: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3764-9801/csi-snapshotter Mar 25 17:49:05.413: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3764 Mar 25 17:49:05.474: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3764 Mar 25 17:49:05.479: INFO: deleting *v1.Role: csi-mock-volumes-3764-9801/external-snapshotter-leaderelection-csi-mock-volumes-3764 Mar 25 17:49:05.486: INFO: 
deleting *v1.RoleBinding: csi-mock-volumes-3764-9801/external-snapshotter-leaderelection Mar 25 17:49:05.492: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3764-9801/csi-mock Mar 25 17:49:05.498: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3764 Mar 25 17:49:05.503: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3764 Mar 25 17:49:05.515: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3764 Mar 25 17:49:05.521: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3764 Mar 25 17:49:05.538: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3764 Mar 25 17:49:05.552: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3764 Mar 25 17:49:05.558: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3764 Mar 25 17:49:05.563: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3764-9801/csi-mockplugin Mar 25 17:49:05.569: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3764-9801/csi-mockplugin-attacher Mar 25 17:49:05.589: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3764-9801/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-3764-9801 STEP: Waiting for namespaces [csi-mock-volumes-3764-9801] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:50:01.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:165.317 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should expand volume by restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":115,"completed":36,"skipped":2493,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should support r/w [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65 [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:50:01.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should support r/w [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65 STEP: Creating a pod to test hostPath r/w Mar 25 17:50:01.703: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace 
"hostpath-3626" to be "Succeeded or Failed" Mar 25 17:50:01.738: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 34.840044ms Mar 25 17:50:03.805: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101399058s Mar 25 17:50:05.809: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105403139s Mar 25 17:50:07.813: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109068462s Mar 25 17:50:09.817: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.113802328s STEP: Saw pod success Mar 25 17:50:09.817: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Mar 25 17:50:09.821: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-2: STEP: delete the pod Mar 25 17:50:09.867: INFO: Waiting for pod pod-host-path-test to disappear Mar 25 17:50:09.894: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:50:09.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-3626" for this suite. • [SLOW TEST:8.283 seconds] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support r/w [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":115,"completed":37,"skipped":2544,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:50:09.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 17:50:14.350: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-ccd1507f-43ac-4e7b-a089-fd76d9cbf752-backend && mount --bind /tmp/local-volume-test-ccd1507f-43ac-4e7b-a089-fd76d9cbf752-backend /tmp/local-volume-test-ccd1507f-43ac-4e7b-a089-fd76d9cbf752-backend && ln -s /tmp/local-volume-test-ccd1507f-43ac-4e7b-a089-fd76d9cbf752-backend /tmp/local-volume-test-ccd1507f-43ac-4e7b-a089-fd76d9cbf752] 
Namespace:persistent-local-volumes-test-2759 PodName:hostexec-latest-worker-dzsbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:50:14.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 17:50:14.469: INFO: Creating a PV followed by a PVC Mar 25 17:50:14.486: INFO: Waiting for PV local-pv9j4v7 to bind to PVC pvc-tjmw4 Mar 25 17:50:14.486: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-tjmw4] to have phase Bound Mar 25 17:50:14.509: INFO: PersistentVolumeClaim pvc-tjmw4 found but phase is Pending instead of Bound. Mar 25 17:50:16.512: INFO: PersistentVolumeClaim pvc-tjmw4 found but phase is Pending instead of Bound. Mar 25 17:50:18.518: INFO: PersistentVolumeClaim pvc-tjmw4 found and phase=Bound (4.032279049s) Mar 25 17:50:18.518: INFO: Waiting up to 3m0s for PersistentVolume local-pv9j4v7 to have phase Bound Mar 25 17:50:18.521: INFO: PersistentVolume local-pv9j4v7 found and phase=Bound (3.465214ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Mar 25 17:50:18.528: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:50:18.529: INFO: Deleting PersistentVolumeClaim "pvc-tjmw4" Mar 25 17:50:18.534: INFO: Deleting PersistentVolume "local-pv9j4v7" STEP: Removing the test directory Mar 25 17:50:18.579: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-ccd1507f-43ac-4e7b-a089-fd76d9cbf752 && umount /tmp/local-volume-test-ccd1507f-43ac-4e7b-a089-fd76d9cbf752-backend && rm -r /tmp/local-volume-test-ccd1507f-43ac-4e7b-a089-fd76d9cbf752-backend] Namespace:persistent-local-volumes-test-2759 PodName:hostexec-latest-worker-dzsbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:50:18.579: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:50:18.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2759" for this suite. 
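
Throughout these local-volume specs, node-level setup and teardown (mkdir, mount --bind, losetup, rm) is performed by exec'ing a shell command inside a privileged hostexec pod, which is what the ExecWithOptions records above show. A stripped-down version of that exec call with client-go; the namespace, pod and container names are taken from the records above, while the kubeconfig path and the command being run (ls /tmp) are illustrative:

package main

import (
    "bytes"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/kubernetes/scheme"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/tools/remotecommand"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // Build the exec request against the hostexec pod, mirroring the
    // "nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c <cmd>" pattern in the log.
    req := cs.CoreV1().RESTClient().Post().
        Resource("pods").
        Namespace("persistent-local-volumes-test-2759").
        Name("hostexec-latest-worker-dzsbk").
        SubResource("exec").
        VersionedParams(&corev1.PodExecOptions{
            Container: "agnhost-container",
            Command:   []string{"nsenter", "--mount=/rootfs/proc/1/ns/mnt", "--", "sh", "-c", "ls /tmp"},
            Stdout:    true,
            Stderr:    true,
        }, scheme.ParameterCodec)

    exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
    if err != nil {
        panic(err)
    }

    var stdout, stderr bytes.Buffer
    if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
        panic(err)
    }
    fmt.Printf("stdout: %s\nstderr: %s\n", stdout.String(), stderr.String())
}

Running the command through nsenter against PID 1's mount namespace is what lets the hostexec pod act on the node's real /tmp rather than on its own container filesystem.
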
S [SKIPPING] [8.831 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:50:18.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 17:50:20.927: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-a312bf9b-bbae-41dd-aba0-04c162f21f95-backend && ln -s /tmp/local-volume-test-a312bf9b-bbae-41dd-aba0-04c162f21f95-backend /tmp/local-volume-test-a312bf9b-bbae-41dd-aba0-04c162f21f95] Namespace:persistent-local-volumes-test-5580 PodName:hostexec-latest-worker-vs2q9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:50:20.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 17:50:21.017: INFO: Creating a PV followed by a PVC Mar 25 17:50:21.031: INFO: Waiting for PV local-pv2wtcf to bind to PVC pvc-hqdg6 Mar 25 17:50:21.031: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-hqdg6] to have phase Bound Mar 25 17:50:21.049: INFO: PersistentVolumeClaim pvc-hqdg6 found but phase is Pending instead of Bound. Mar 25 17:50:23.055: INFO: PersistentVolumeClaim pvc-hqdg6 found but phase is Pending instead of Bound. Mar 25 17:50:25.059: INFO: PersistentVolumeClaim pvc-hqdg6 found but phase is Pending instead of Bound. Mar 25 17:50:27.063: INFO: PersistentVolumeClaim pvc-hqdg6 found but phase is Pending instead of Bound. Mar 25 17:50:29.080: INFO: PersistentVolumeClaim pvc-hqdg6 found but phase is Pending instead of Bound. 
Mar 25 17:50:31.085: INFO: PersistentVolumeClaim pvc-hqdg6 found but phase is Pending instead of Bound. Mar 25 17:50:33.091: INFO: PersistentVolumeClaim pvc-hqdg6 found and phase=Bound (12.05969391s) Mar 25 17:50:33.091: INFO: Waiting up to 3m0s for PersistentVolume local-pv2wtcf to have phase Bound Mar 25 17:50:33.095: INFO: PersistentVolume local-pv2wtcf found and phase=Bound (3.65485ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 17:50:37.144: INFO: pod "pod-03150551-58e4-4aa6-9f42-40751f47d7f1" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 17:50:37.144: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5580 PodName:pod-03150551-58e4-4aa6-9f42-40751f47d7f1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:50:37.144: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:50:37.255: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Mar 25 17:50:37.255: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5580 PodName:pod-03150551-58e4-4aa6-9f42-40751f47d7f1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:50:37.255: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:50:37.353: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Mar 25 17:50:37.353: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-a312bf9b-bbae-41dd-aba0-04c162f21f95 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5580 PodName:pod-03150551-58e4-4aa6-9f42-40751f47d7f1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:50:37.353: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:50:37.465: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-a312bf9b-bbae-41dd-aba0-04c162f21f95 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-03150551-58e4-4aa6-9f42-40751f47d7f1 in namespace persistent-local-volumes-test-5580 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:50:37.470: INFO: Deleting PersistentVolumeClaim "pvc-hqdg6" Mar 25 17:50:37.491: INFO: Deleting PersistentVolume "local-pv2wtcf" STEP: Removing the test directory Mar 25 17:50:37.507: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a312bf9b-bbae-41dd-aba0-04c162f21f95 && rm -r /tmp/local-volume-test-a312bf9b-bbae-41dd-aba0-04c162f21f95-backend] Namespace:persistent-local-volumes-test-5580 
PodName:hostexec-latest-worker-vs2q9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:50:37.507: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:50:37.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5580" for this suite. • [SLOW TEST:18.904 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":115,"completed":38,"skipped":2646,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:50:37.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-529d2102-26c0-4635-adba-8709ca182071" Mar 25 17:50:41.787: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-529d2102-26c0-4635-adba-8709ca182071" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-529d2102-26c0-4635-adba-8709ca182071" "/tmp/local-volume-test-529d2102-26c0-4635-adba-8709ca182071"] Namespace:persistent-local-volumes-test-7700 PodName:hostexec-latest-worker2-2lhv2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:50:41.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 
17:50:41.901: INFO: Creating a PV followed by a PVC Mar 25 17:50:41.911: INFO: Waiting for PV local-pv97fjh to bind to PVC pvc-zspf4 Mar 25 17:50:41.911: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-zspf4] to have phase Bound Mar 25 17:50:41.917: INFO: PersistentVolumeClaim pvc-zspf4 found but phase is Pending instead of Bound. Mar 25 17:50:44.069: INFO: PersistentVolumeClaim pvc-zspf4 found but phase is Pending instead of Bound. Mar 25 17:50:46.074: INFO: PersistentVolumeClaim pvc-zspf4 found but phase is Pending instead of Bound. Mar 25 17:50:48.079: INFO: PersistentVolumeClaim pvc-zspf4 found and phase=Bound (6.167331706s) Mar 25 17:50:48.079: INFO: Waiting up to 3m0s for PersistentVolume local-pv97fjh to have phase Bound Mar 25 17:50:48.082: INFO: PersistentVolume local-pv97fjh found and phase=Bound (2.834351ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Mar 25 17:50:48.086: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:50:48.087: INFO: Deleting PersistentVolumeClaim "pvc-zspf4" Mar 25 17:50:48.091: INFO: Deleting PersistentVolume "local-pv97fjh" STEP: Unmount tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-529d2102-26c0-4635-adba-8709ca182071" Mar 25 17:50:48.100: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-529d2102-26c0-4635-adba-8709ca182071"] Namespace:persistent-local-volumes-test-7700 PodName:hostexec-latest-worker2-2lhv2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:50:48.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Mar 25 17:50:48.257: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-529d2102-26c0-4635-adba-8709ca182071] Namespace:persistent-local-volumes-test-7700 PodName:hostexec-latest-worker2-2lhv2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:50:48.257: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:50:48.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7700" for this suite. 
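Note: each of these local-volume tests pre-creates a PersistentVolume backed by a host path and then a claim that binds to it ("Creating a PV followed by a PVC"). A sketch of what such a pair can look like with the core/v1 types, assuming placeholder object names and sizes; the node name and path shape are taken from the log, and the real objects are generated by the e2e framework at runtime.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fs := v1.PersistentVolumeFilesystem
	className := "local-storage" // placeholder; the framework uses its own class name

	pv := &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pv-example"},
		Spec: v1.PersistentVolumeSpec{
			Capacity:    v1.ResourceList{v1.ResourceStorage: resource.MustParse("10Mi")},
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				Local: &v1.LocalVolumeSource{Path: "/tmp/local-volume-example"},
			},
			StorageClassName: className,
			VolumeMode:       &fs,
			// A local PV must pin itself to the node that owns the path.
			NodeAffinity: &v1.VolumeNodeAffinity{
				Required: &v1.NodeSelector{
					NodeSelectorTerms: []v1.NodeSelectorTerm{{
						MatchExpressions: []v1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: v1.NodeSelectorOpIn,
							Values:   []string{"latest-worker2"},
						}},
					}},
				},
			},
		},
	}

	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-"},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			StorageClassName: &className,
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("10Mi")},
			},
		},
	}
	fmt.Println(pv.Name, pvc.GenerateName)
}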
S [SKIPPING] [10.829 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:50:48.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 17:50:52.734: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-06135ec7-c081-43a5-9e03-3292a095fb3b-backend && ln -s /tmp/local-volume-test-06135ec7-c081-43a5-9e03-3292a095fb3b-backend /tmp/local-volume-test-06135ec7-c081-43a5-9e03-3292a095fb3b] Namespace:persistent-local-volumes-test-8645 PodName:hostexec-latest-worker2-9x6r8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:50:52.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 17:50:53.025: INFO: Creating a PV followed by a PVC Mar 25 17:50:53.053: INFO: Waiting for PV local-pvznjnr to bind to PVC pvc-56c4c Mar 25 17:50:53.053: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-56c4c] to have phase Bound Mar 25 17:50:53.068: INFO: PersistentVolumeClaim pvc-56c4c found but phase is Pending instead of Bound. 
Mar 25 17:50:55.151: INFO: PersistentVolumeClaim pvc-56c4c found and phase=Bound (2.098708179s) Mar 25 17:50:55.151: INFO: Waiting up to 3m0s for PersistentVolume local-pvznjnr to have phase Bound Mar 25 17:50:55.154: INFO: PersistentVolume local-pvznjnr found and phase=Bound (2.853511ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Mar 25 17:50:55.158: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:50:55.159: INFO: Deleting PersistentVolumeClaim "pvc-56c4c" Mar 25 17:50:55.194: INFO: Deleting PersistentVolume "local-pvznjnr" STEP: Removing the test directory Mar 25 17:50:55.220: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-06135ec7-c081-43a5-9e03-3292a095fb3b && rm -r /tmp/local-volume-test-06135ec7-c081-43a5-9e03-3292a095fb3b-backend] Namespace:persistent-local-volumes-test-8645 PodName:hostexec-latest-worker2-9x6r8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:50:55.220: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:50:55.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8645" for this suite. 
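Note: the cases skipped pending #73168 would verify that a local volume's group ownership follows the fsGroup of whichever pod currently mounts it. A hedged sketch of the kind of pod such a test would create; the claim name, image, command, and fsGroup values are chosen purely for illustration.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithFSGroup builds a pod that mounts an existing claim with an explicit
// fsGroup; creating two such pods in sequence with different fsGroup values
// is the scenario the skipped tests describe.
func podWithFSGroup(name, claimName string, fsGroup int64) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			SecurityContext: &v1.PodSecurityContext{FSGroup: &fsGroup},
			Containers: []v1.Container{{
				Name:    "write-pod",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29", // placeholder image
				Command: []string{"sh", "-c", "id -G && sleep 3600"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "volume1",
					MountPath: "/mnt/volume1",
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "volume1",
				VolumeSource: v1.VolumeSource{
					PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{ClaimName: claimName},
				},
			}},
			RestartPolicy: v1.RestartPolicyNever,
		},
	}
}

func main() {
	first := podWithFSGroup("pod-fsgroup-1", "pvc-example", 1000)
	second := podWithFSGroup("pod-fsgroup-2", "pvc-example", 2000)
	fmt.Println(first.Name, second.Name)
}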
S [SKIPPING] [6.977 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:298 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:50:55.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support memory backed volumes of specified size /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:298 [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:50:55.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9718" for this suite. 
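Note: the EmptyDir case above exercises a memory-backed emptyDir whose tmpfs mount is capped by sizeLimit. An illustrative construction with the core/v1 types, assuming a placeholder image and a 64Mi limit (the test's actual size is not shown in this log).

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Medium: Memory makes kubelet back the emptyDir with tmpfs; SizeLimit
	// caps how large that tmpfs mount may grow.
	sizeLimit := resource.MustParse("64Mi")
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "emptydir-memory-"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:         "test-container",
				Image:        "k8s.gcr.io/e2e-test-images/busybox:1.29", // placeholder image
				Command:      []string{"sh", "-c", "df -h /mnt/empty && sleep 5"},
				VolumeMounts: []v1.VolumeMount{{Name: "empty", MountPath: "/mnt/empty"}},
			}},
			Volumes: []v1.Volume{{
				Name: "empty",
				VolumeSource: v1.VolumeSource{
					EmptyDir: &v1.EmptyDirVolumeSource{
						Medium:    v1.StorageMediumMemory,
						SizeLimit: &sizeLimit,
					},
				},
			}},
			RestartPolicy: v1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.GenerateName)
}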
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":115,"completed":39,"skipped":2865,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:50:55.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 17:50:58.227: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-96b9ae6c-ff13-460c-b57d-8962867d237b] Namespace:persistent-local-volumes-test-8921 PodName:hostexec-latest-worker-6phb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:50:58.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 17:50:58.370: INFO: Creating a PV followed by a PVC Mar 25 17:50:58.381: INFO: Waiting for PV local-pvdh5jx to bind to PVC pvc-ppkqt Mar 25 17:50:58.381: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-ppkqt] to have phase Bound Mar 25 17:50:58.406: INFO: PersistentVolumeClaim pvc-ppkqt found but phase is Pending instead of Bound. Mar 25 17:51:00.411: INFO: PersistentVolumeClaim pvc-ppkqt found but phase is Pending instead of Bound. Mar 25 17:51:02.416: INFO: PersistentVolumeClaim pvc-ppkqt found but phase is Pending instead of Bound. 
Mar 25 17:51:04.420: INFO: PersistentVolumeClaim pvc-ppkqt found and phase=Bound (6.038828105s) Mar 25 17:51:04.420: INFO: Waiting up to 3m0s for PersistentVolume local-pvdh5jx to have phase Bound Mar 25 17:51:04.422: INFO: PersistentVolume local-pvdh5jx found and phase=Bound (2.368036ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Mar 25 17:51:04.427: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:51:04.428: INFO: Deleting PersistentVolumeClaim "pvc-ppkqt" Mar 25 17:51:04.432: INFO: Deleting PersistentVolume "local-pvdh5jx" STEP: Removing the test directory Mar 25 17:51:04.447: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-96b9ae6c-ff13-460c-b57d-8962867d237b] Namespace:persistent-local-volumes-test-8921 PodName:hostexec-latest-worker-6phb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:51:04.447: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:51:04.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8921" for this suite. 
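Note: the AfterEach blocks in these local-volume tests delete the claim before the pre-created PV ("Cleaning up PVC and PV"). A small client-go sketch of that teardown order, reusing names from the log purely as examples; it is not the framework's cleanup code.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// cleanupLocalVolume removes the claim first and then the pre-created PV,
// matching the order of the "Cleaning up PVC and PV" steps in the log.
func cleanupLocalVolume(cs kubernetes.Interface, ns, pvcName, pvName string) error {
	if err := cs.CoreV1().PersistentVolumeClaims(ns).Delete(context.TODO(), pvcName, metav1.DeleteOptions{}); err != nil {
		return err
	}
	return cs.CoreV1().PersistentVolumes().Delete(context.TODO(), pvName, metav1.DeleteOptions{})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	// Namespace, PVC, and PV names are copied from the log for illustration only.
	if err := cleanupLocalVolume(cs, "persistent-local-volumes-test-8921", "pvc-ppkqt", "local-pvdh5jx"); err != nil {
		panic(err)
	}
}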
S [SKIPPING] [9.515 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume storage capacity unlimited /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:51:05.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] unlimited /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-4195 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 17:51:05.342: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4195-3587/csi-attacher Mar 25 17:51:05.346: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4195 Mar 25 17:51:05.346: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4195 Mar 25 17:51:05.363: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4195 Mar 25 17:51:05.373: INFO: creating *v1.Role: csi-mock-volumes-4195-3587/external-attacher-cfg-csi-mock-volumes-4195 Mar 25 17:51:05.378: INFO: creating *v1.RoleBinding: csi-mock-volumes-4195-3587/csi-attacher-role-cfg Mar 25 17:51:05.393: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4195-3587/csi-provisioner Mar 25 17:51:05.420: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4195 Mar 25 17:51:05.420: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4195 Mar 25 17:51:05.452: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4195 Mar 25 17:51:05.462: INFO: creating *v1.Role: csi-mock-volumes-4195-3587/external-provisioner-cfg-csi-mock-volumes-4195 Mar 25 17:51:05.477: INFO: creating *v1.RoleBinding: csi-mock-volumes-4195-3587/csi-provisioner-role-cfg Mar 25 17:51:05.492: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4195-3587/csi-resizer Mar 25 17:51:05.507: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4195 Mar 25 17:51:05.507: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4195 Mar 25 17:51:05.513: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4195 Mar 25 17:51:05.518: INFO: creating *v1.Role: csi-mock-volumes-4195-3587/external-resizer-cfg-csi-mock-volumes-4195 
Mar 25 17:51:05.525: INFO: creating *v1.RoleBinding: csi-mock-volumes-4195-3587/csi-resizer-role-cfg Mar 25 17:51:05.541: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4195-3587/csi-snapshotter Mar 25 17:51:05.590: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4195 Mar 25 17:51:05.590: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4195 Mar 25 17:51:05.615: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4195 Mar 25 17:51:05.633: INFO: creating *v1.Role: csi-mock-volumes-4195-3587/external-snapshotter-leaderelection-csi-mock-volumes-4195 Mar 25 17:51:05.667: INFO: creating *v1.RoleBinding: csi-mock-volumes-4195-3587/external-snapshotter-leaderelection Mar 25 17:51:05.681: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4195-3587/csi-mock Mar 25 17:51:05.686: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4195 Mar 25 17:51:05.721: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4195 Mar 25 17:51:05.725: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4195 Mar 25 17:51:05.740: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4195 Mar 25 17:51:05.785: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4195 Mar 25 17:51:05.813: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4195 Mar 25 17:51:05.847: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4195 Mar 25 17:51:05.850: INFO: creating *v1.StatefulSet: csi-mock-volumes-4195-3587/csi-mockplugin Mar 25 17:51:05.921: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4195 Mar 25 17:51:06.021: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4195" Mar 25 17:51:06.026: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4195 to register on node latest-worker2 STEP: Creating pod Mar 25 17:51:22.460: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 17:51:22.484: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-lc2cr] to have phase Bound Mar 25 17:51:22.489: INFO: PersistentVolumeClaim pvc-lc2cr found but phase is Pending instead of Bound. 
Mar 25 17:51:24.511: INFO: PersistentVolumeClaim pvc-lc2cr found and phase=Bound (2.02760463s) Mar 25 17:51:28.535: INFO: Deleting pod "pvc-volume-tester-2klzs" in namespace "csi-mock-volumes-4195" Mar 25 17:51:28.540: INFO: Wait up to 5m0s for pod "pvc-volume-tester-2klzs" to be fully deleted STEP: Checking PVC events Mar 25 17:52:17.574: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-lc2cr", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4195", SelfLink:"", UID:"1ef78c65-6163-4d79-bf6e-049ea159e115", ResourceVersion:"1280179", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752291482, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002f47de8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002f47e00)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002336c00), VolumeMode:(*v1.PersistentVolumeMode)(0xc002336c10), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 17:52:17.574: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-lc2cr", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4195", SelfLink:"", UID:"1ef78c65-6163-4d79-bf6e-049ea159e115", ResourceVersion:"1280180", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752291482, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4195"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000e97320), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e97338)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000e97350), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e97368)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0022f5940), 
VolumeMode:(*v1.PersistentVolumeMode)(0xc0022f5960), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 17:52:17.574: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-lc2cr", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4195", SelfLink:"", UID:"1ef78c65-6163-4d79-bf6e-049ea159e115", ResourceVersion:"1280186", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752291482, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4195"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002ff0a38), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002ff0a50)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002ff0a68), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002ff0a80)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-1ef78c65-6163-4d79-bf6e-049ea159e115", StorageClassName:(*string)(0xc00094d060), VolumeMode:(*v1.PersistentVolumeMode)(0xc00094d070), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 17:52:17.574: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-lc2cr", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4195", SelfLink:"", UID:"1ef78c65-6163-4d79-bf6e-049ea159e115", ResourceVersion:"1280187", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752291482, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4195"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003420570), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003420588)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034205a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034205b8)}}}, 
Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-1ef78c65-6163-4d79-bf6e-049ea159e115", StorageClassName:(*string)(0xc003d104d0), VolumeMode:(*v1.PersistentVolumeMode)(0xc003d104e0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 17:52:17.575: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-lc2cr", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4195", SelfLink:"", UID:"1ef78c65-6163-4d79-bf6e-049ea159e115", ResourceVersion:"1280353", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752291482, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc002ff0b28), DeletionGracePeriodSeconds:(*int64)(0xc0029e8bc8), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4195"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002ff0b58), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002ff0c18)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002ff0c30), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002ff0c48)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-1ef78c65-6163-4d79-bf6e-049ea159e115", StorageClassName:(*string)(0xc00094d0d0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00094d0e0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 17:52:17.575: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-lc2cr", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4195", SelfLink:"", UID:"1ef78c65-6163-4d79-bf6e-049ea159e115", ResourceVersion:"1280354", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752291482, loc:(*time.Location)(0x99208a0)}}, 
DeletionTimestamp:(*v1.Time)(0xc0017c0018), DeletionGracePeriodSeconds:(*int64)(0xc004e90028), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4195"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0017c0048), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0017c0060)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0017c0078), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0017c0090)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-1ef78c65-6163-4d79-bf6e-049ea159e115", StorageClassName:(*string)(0xc004074020), VolumeMode:(*v1.PersistentVolumeMode)(0xc004074030), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-2klzs Mar 25 17:52:17.575: INFO: Deleting pod "pvc-volume-tester-2klzs" in namespace "csi-mock-volumes-4195" STEP: Deleting claim pvc-lc2cr STEP: Deleting storageclass csi-mock-volumes-4195-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4195 STEP: Waiting for namespaces [csi-mock-volumes-4195] to vanish STEP: uninstalling csi mock driver Mar 25 17:52:25.825: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4195-3587/csi-attacher Mar 25 17:52:25.832: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4195 Mar 25 17:52:26.107: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4195 Mar 25 17:52:26.432: INFO: deleting *v1.Role: csi-mock-volumes-4195-3587/external-attacher-cfg-csi-mock-volumes-4195 Mar 25 17:52:26.452: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4195-3587/csi-attacher-role-cfg Mar 25 17:52:26.483: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4195-3587/csi-provisioner Mar 25 17:52:26.796: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4195 Mar 25 17:52:26.871: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4195 Mar 25 17:52:26.984: INFO: deleting *v1.Role: csi-mock-volumes-4195-3587/external-provisioner-cfg-csi-mock-volumes-4195 Mar 25 17:52:27.008: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4195-3587/csi-provisioner-role-cfg Mar 25 17:52:27.014: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4195-3587/csi-resizer Mar 25 17:52:27.034: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4195 Mar 25 17:52:27.050: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4195 Mar 25 17:52:27.127: INFO: deleting 
*v1.Role: csi-mock-volumes-4195-3587/external-resizer-cfg-csi-mock-volumes-4195 Mar 25 17:52:27.134: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4195-3587/csi-resizer-role-cfg Mar 25 17:52:27.140: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4195-3587/csi-snapshotter Mar 25 17:52:27.146: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4195 Mar 25 17:52:27.152: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4195 Mar 25 17:52:27.163: INFO: deleting *v1.Role: csi-mock-volumes-4195-3587/external-snapshotter-leaderelection-csi-mock-volumes-4195 Mar 25 17:52:27.170: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4195-3587/external-snapshotter-leaderelection Mar 25 17:52:27.216: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4195-3587/csi-mock Mar 25 17:52:27.238: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4195 Mar 25 17:52:27.244: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4195 Mar 25 17:52:27.755: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4195 Mar 25 17:52:27.770: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4195 Mar 25 17:52:27.775: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4195 Mar 25 17:52:27.811: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4195 Mar 25 17:52:27.830: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4195 Mar 25 17:52:27.835: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4195-3587/csi-mockplugin Mar 25 17:52:27.870: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4195 STEP: deleting the driver namespace: csi-mock-volumes-4195-3587 STEP: Waiting for namespaces [csi-mock-volumes-4195-3587] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:53:17.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:132.835 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 unlimited /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":115,"completed":40,"skipped":3023,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:53:17.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] nonexistent volume subPath should have the correct mode and owner using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63 STEP: Creating a pod to test emptydir subpath on tmpfs Mar 25 17:53:18.015: INFO: Waiting up to 5m0s for pod "pod-664141f3-4cca-49eb-94f6-082814e3cc06" in namespace "emptydir-7388" to be "Succeeded or Failed" Mar 25 17:53:18.020: INFO: Pod "pod-664141f3-4cca-49eb-94f6-082814e3cc06": Phase="Pending", Reason="", readiness=false. Elapsed: 5.058464ms Mar 25 17:53:20.024: INFO: Pod "pod-664141f3-4cca-49eb-94f6-082814e3cc06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008783921s Mar 25 17:53:22.306: INFO: Pod "pod-664141f3-4cca-49eb-94f6-082814e3cc06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.290533721s Mar 25 17:53:24.311: INFO: Pod "pod-664141f3-4cca-49eb-94f6-082814e3cc06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.296041501s STEP: Saw pod success Mar 25 17:53:24.311: INFO: Pod "pod-664141f3-4cca-49eb-94f6-082814e3cc06" satisfied condition "Succeeded or Failed" Mar 25 17:53:24.314: INFO: Trying to get logs from node latest-worker2 pod pod-664141f3-4cca-49eb-94f6-082814e3cc06 container test-container: STEP: delete the pod Mar 25 17:53:24.389: INFO: Waiting for pod pod-664141f3-4cca-49eb-94f6-082814e3cc06 to disappear Mar 25 17:53:24.395: INFO: Pod pod-664141f3-4cca-49eb-94f6-082814e3cc06 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:53:24.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7388" for this suite. • [SLOW TEST:6.495 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48 nonexistent volume subPath should have the correct mode and owner using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":115,"completed":41,"skipped":3043,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. 
[sig-storage] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:533 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:53:24.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:533 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a PersistentVolumeClaim with storage class STEP: Ensuring resource quota status captures persistent volume claim creation STEP: Deleting a PersistentVolumeClaim STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:53:35.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-267" for this suite. • [SLOW TEST:11.141 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:533 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. 
[sig-storage]","total":115,"completed":42,"skipped":3062,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSS ------------------------------ [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:53:35.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume by restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-5862 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 17:53:35.733: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5862-1008/csi-attacher Mar 25 17:53:35.737: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5862 Mar 25 17:53:35.737: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5862 Mar 25 17:53:35.751: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5862 Mar 25 17:53:35.772: INFO: creating *v1.Role: csi-mock-volumes-5862-1008/external-attacher-cfg-csi-mock-volumes-5862 Mar 25 17:53:35.787: INFO: creating *v1.RoleBinding: csi-mock-volumes-5862-1008/csi-attacher-role-cfg Mar 25 17:53:35.836: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5862-1008/csi-provisioner Mar 25 17:53:35.839: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5862 Mar 25 17:53:35.839: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5862 Mar 25 17:53:35.847: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5862 Mar 25 17:53:35.869: INFO: creating *v1.Role: csi-mock-volumes-5862-1008/external-provisioner-cfg-csi-mock-volumes-5862 Mar 25 17:53:35.872: INFO: creating *v1.RoleBinding: csi-mock-volumes-5862-1008/csi-provisioner-role-cfg Mar 25 17:53:35.889: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5862-1008/csi-resizer Mar 25 17:53:35.901: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5862 Mar 25 17:53:35.901: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5862 Mar 25 17:53:35.919: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5862 Mar 25 17:53:35.931: INFO: creating *v1.Role: csi-mock-volumes-5862-1008/external-resizer-cfg-csi-mock-volumes-5862 Mar 25 17:53:35.950: INFO: creating *v1.RoleBinding: csi-mock-volumes-5862-1008/csi-resizer-role-cfg Mar 25 17:53:35.970: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5862-1008/csi-snapshotter Mar 25 17:53:35.985: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5862 Mar 25 17:53:35.985: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5862 Mar 25 17:53:35.991: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5862 Mar 25 17:53:36.012: INFO: creating *v1.Role: 
csi-mock-volumes-5862-1008/external-snapshotter-leaderelection-csi-mock-volumes-5862 Mar 25 17:53:36.025: INFO: creating *v1.RoleBinding: csi-mock-volumes-5862-1008/external-snapshotter-leaderelection Mar 25 17:53:36.042: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5862-1008/csi-mock Mar 25 17:53:36.070: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5862 Mar 25 17:53:36.079: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5862 Mar 25 17:53:36.099: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5862 Mar 25 17:53:36.129: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5862 Mar 25 17:53:36.145: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5862 Mar 25 17:53:36.151: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5862 Mar 25 17:53:36.156: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5862 Mar 25 17:53:36.195: INFO: creating *v1.StatefulSet: csi-mock-volumes-5862-1008/csi-mockplugin Mar 25 17:53:36.224: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5862 Mar 25 17:53:36.250: INFO: creating *v1.StatefulSet: csi-mock-volumes-5862-1008/csi-mockplugin-resizer Mar 25 17:53:36.288: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5862" Mar 25 17:53:36.328: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5862 to register on node latest-worker STEP: Creating pod Mar 25 17:53:46.111: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 17:53:46.138: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-b57jt] to have phase Bound Mar 25 17:53:46.141: INFO: PersistentVolumeClaim pvc-b57jt found but phase is Pending instead of Bound. 
Mar 25 17:53:48.146: INFO: PersistentVolumeClaim pvc-b57jt found and phase=Bound (2.008060588s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Checking for conditions on pvc STEP: Deleting the previously created pod Mar 25 17:53:54.263: INFO: Deleting pod "pvc-volume-tester-txqwm" in namespace "csi-mock-volumes-5862" Mar 25 17:53:54.268: INFO: Wait up to 5m0s for pod "pvc-volume-tester-txqwm" to be fully deleted STEP: Creating a new pod with same volume STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-txqwm Mar 25 17:54:06.330: INFO: Deleting pod "pvc-volume-tester-txqwm" in namespace "csi-mock-volumes-5862" STEP: Deleting pod pvc-volume-tester-rjr92 Mar 25 17:54:06.345: INFO: Deleting pod "pvc-volume-tester-rjr92" in namespace "csi-mock-volumes-5862" Mar 25 17:54:06.405: INFO: Wait up to 5m0s for pod "pvc-volume-tester-rjr92" to be fully deleted STEP: Deleting claim pvc-b57jt Mar 25 17:54:16.452: INFO: Waiting up to 2m0s for PersistentVolume pvc-5367c713-2e21-4ba1-9476-f86810557d5b to get deleted Mar 25 17:54:16.457: INFO: PersistentVolume pvc-5367c713-2e21-4ba1-9476-f86810557d5b found and phase=Bound (5.480344ms) Mar 25 17:54:18.463: INFO: PersistentVolume pvc-5367c713-2e21-4ba1-9476-f86810557d5b was removed STEP: Deleting storageclass csi-mock-volumes-5862-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5862 STEP: Waiting for namespaces [csi-mock-volumes-5862] to vanish STEP: uninstalling csi mock driver Mar 25 17:54:24.509: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5862-1008/csi-attacher Mar 25 17:54:24.515: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5862 Mar 25 17:54:24.526: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5862 Mar 25 17:54:24.531: INFO: deleting *v1.Role: csi-mock-volumes-5862-1008/external-attacher-cfg-csi-mock-volumes-5862 Mar 25 17:54:24.537: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5862-1008/csi-attacher-role-cfg Mar 25 17:54:24.543: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5862-1008/csi-provisioner Mar 25 17:54:24.549: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5862 Mar 25 17:54:24.597: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5862 Mar 25 17:54:24.603: INFO: deleting *v1.Role: csi-mock-volumes-5862-1008/external-provisioner-cfg-csi-mock-volumes-5862 Mar 25 17:54:24.612: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5862-1008/csi-provisioner-role-cfg Mar 25 17:54:24.617: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5862-1008/csi-resizer Mar 25 17:54:24.622: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5862 Mar 25 17:54:24.629: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5862 Mar 25 17:54:24.671: INFO: deleting *v1.Role: csi-mock-volumes-5862-1008/external-resizer-cfg-csi-mock-volumes-5862 Mar 25 17:54:24.684: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5862-1008/csi-resizer-role-cfg Mar 25 17:54:24.695: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5862-1008/csi-snapshotter Mar 25 17:54:24.743: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5862 Mar 25 17:54:24.766: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5862 Mar 25 17:54:24.777: INFO: deleting *v1.Role: csi-mock-volumes-5862-1008/external-snapshotter-leaderelection-csi-mock-volumes-5862 Mar 25 17:54:24.785: INFO: 
deleting *v1.RoleBinding: csi-mock-volumes-5862-1008/external-snapshotter-leaderelection Mar 25 17:54:24.791: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5862-1008/csi-mock Mar 25 17:54:24.797: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5862 Mar 25 17:54:24.802: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5862 Mar 25 17:54:24.833: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5862 Mar 25 17:54:24.874: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5862 Mar 25 17:54:24.881: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5862 Mar 25 17:54:24.887: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5862 Mar 25 17:54:24.892: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5862 Mar 25 17:54:24.898: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5862-1008/csi-mockplugin Mar 25 17:54:24.924: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5862 Mar 25 17:54:24.941: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5862-1008/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-5862-1008 STEP: Waiting for namespaces [csi-mock-volumes-5862-1008] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:55:08.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:93.430 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should expand volume by restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":115,"completed":43,"skipped":3065,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSS ------------------------------ [sig-storage] PVC Protection Verify "immediate" deletion of a PVC that is not in active use by a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114 [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:55:08.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Mar 25 17:55:09.035: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Mar 25 17:55:09.048: INFO: Default storage class: "standard" Mar 25 17:55:09.048: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Creating a Pod that 
becomes Running and therefore is actively using the PVC STEP: Waiting for PVC to become Bound Mar 25 17:55:19.084: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-protectionq42zm] to have phase Bound Mar 25 17:55:19.088: INFO: PersistentVolumeClaim pvc-protectionq42zm found and phase=Bound (3.542065ms) STEP: Checking that PVC Protection finalizer is set [It] Verify "immediate" deletion of a PVC that is not in active use by a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114 STEP: Deleting the pod using the PVC Mar 25 17:55:19.091: INFO: Deleting pod "pvc-tester-7mfpj" in namespace "pvc-protection-2470" Mar 25 17:55:19.097: INFO: Wait up to 5m0s for pod "pvc-tester-7mfpj" to be fully deleted STEP: Deleting the PVC Mar 25 17:55:27.116: INFO: Waiting up to 3m0s for PersistentVolumeClaim pvc-protectionq42zm to be removed Mar 25 17:55:29.147: INFO: Claim "pvc-protectionq42zm" in namespace "pvc-protection-2470" doesn't exist in the system [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:55:29.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-2470" for this suite. [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 • [SLOW TEST:20.182 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify "immediate" deletion of a PVC that is not in active use by a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114 ------------------------------ {"msg":"PASSED [sig-storage] PVC Protection Verify \"immediate\" deletion of a PVC that is not in active use by a pod","total":115,"completed":44,"skipped":3075,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:55:29.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when podInfoOnMount=nil /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-1594 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 17:55:29.775: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1594-4453/csi-attacher Mar 25 17:55:29.789: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1594 Mar 25 17:55:29.789: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1594 Mar 25 17:55:29.792: INFO: creating *v1.ClusterRoleBinding: 
csi-attacher-role-csi-mock-volumes-1594 Mar 25 17:55:29.797: INFO: creating *v1.Role: csi-mock-volumes-1594-4453/external-attacher-cfg-csi-mock-volumes-1594 Mar 25 17:55:29.803: INFO: creating *v1.RoleBinding: csi-mock-volumes-1594-4453/csi-attacher-role-cfg Mar 25 17:55:29.835: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1594-4453/csi-provisioner Mar 25 17:55:29.851: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1594 Mar 25 17:55:29.851: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1594 Mar 25 17:55:29.869: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1594 Mar 25 17:55:29.875: INFO: creating *v1.Role: csi-mock-volumes-1594-4453/external-provisioner-cfg-csi-mock-volumes-1594 Mar 25 17:55:29.881: INFO: creating *v1.RoleBinding: csi-mock-volumes-1594-4453/csi-provisioner-role-cfg Mar 25 17:55:29.921: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1594-4453/csi-resizer Mar 25 17:55:29.925: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1594 Mar 25 17:55:29.925: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1594 Mar 25 17:55:29.935: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1594 Mar 25 17:55:29.941: INFO: creating *v1.Role: csi-mock-volumes-1594-4453/external-resizer-cfg-csi-mock-volumes-1594 Mar 25 17:55:29.979: INFO: creating *v1.RoleBinding: csi-mock-volumes-1594-4453/csi-resizer-role-cfg Mar 25 17:55:30.019: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1594-4453/csi-snapshotter Mar 25 17:55:30.125: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1594 Mar 25 17:55:30.125: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1594 Mar 25 17:55:30.129: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1594 Mar 25 17:55:30.163: INFO: creating *v1.Role: csi-mock-volumes-1594-4453/external-snapshotter-leaderelection-csi-mock-volumes-1594 Mar 25 17:55:30.389: INFO: creating *v1.RoleBinding: csi-mock-volumes-1594-4453/external-snapshotter-leaderelection Mar 25 17:55:30.394: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1594-4453/csi-mock Mar 25 17:55:30.433: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1594 Mar 25 17:55:30.450: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1594 Mar 25 17:55:30.456: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1594 Mar 25 17:55:30.462: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1594 Mar 25 17:55:30.538: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1594 Mar 25 17:55:30.542: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1594 Mar 25 17:55:30.565: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1594 Mar 25 17:55:30.588: INFO: creating *v1.StatefulSet: csi-mock-volumes-1594-4453/csi-mockplugin Mar 25 17:55:30.600: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-1594 Mar 25 17:55:30.625: INFO: creating *v1.StatefulSet: csi-mock-volumes-1594-4453/csi-mockplugin-attacher Mar 25 17:55:30.707: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1594" Mar 25 17:55:30.739: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1594 to register on node latest-worker2 STEP: Creating pod Mar 25 17:55:40.307: INFO: 
Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 17:55:40.355: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-xvn2p] to have phase Bound Mar 25 17:55:40.361: INFO: PersistentVolumeClaim pvc-xvn2p found but phase is Pending instead of Bound. Mar 25 17:55:42.366: INFO: PersistentVolumeClaim pvc-xvn2p found and phase=Bound (2.010401583s) STEP: Deleting the previously created pod Mar 25 17:55:54.390: INFO: Deleting pod "pvc-volume-tester-2trlq" in namespace "csi-mock-volumes-1594" Mar 25 17:55:54.395: INFO: Wait up to 5m0s for pod "pvc-volume-tester-2trlq" to be fully deleted STEP: Checking CSI driver logs Mar 25 17:56:16.439: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/caebb9df-1366-42e9-a16b-bb90321be996/volumes/kubernetes.io~csi/pvc-2fbb8c2b-e801-4fc3-9258-dde77e0c26cd/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-2trlq Mar 25 17:56:16.439: INFO: Deleting pod "pvc-volume-tester-2trlq" in namespace "csi-mock-volumes-1594" STEP: Deleting claim pvc-xvn2p Mar 25 17:56:16.450: INFO: Waiting up to 2m0s for PersistentVolume pvc-2fbb8c2b-e801-4fc3-9258-dde77e0c26cd to get deleted Mar 25 17:56:16.457: INFO: PersistentVolume pvc-2fbb8c2b-e801-4fc3-9258-dde77e0c26cd found and phase=Bound (7.479189ms) Mar 25 17:56:18.462: INFO: PersistentVolume pvc-2fbb8c2b-e801-4fc3-9258-dde77e0c26cd was removed STEP: Deleting storageclass csi-mock-volumes-1594-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-1594 STEP: Waiting for namespaces [csi-mock-volumes-1594] to vanish STEP: uninstalling csi mock driver Mar 25 17:56:24.490: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1594-4453/csi-attacher Mar 25 17:56:24.497: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1594 Mar 25 17:56:24.562: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1594 Mar 25 17:56:24.570: INFO: deleting *v1.Role: csi-mock-volumes-1594-4453/external-attacher-cfg-csi-mock-volumes-1594 Mar 25 17:56:24.576: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1594-4453/csi-attacher-role-cfg Mar 25 17:56:24.593: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1594-4453/csi-provisioner Mar 25 17:56:24.599: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1594 Mar 25 17:56:24.610: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1594 Mar 25 17:56:24.617: INFO: deleting *v1.Role: csi-mock-volumes-1594-4453/external-provisioner-cfg-csi-mock-volumes-1594 Mar 25 17:56:24.623: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1594-4453/csi-provisioner-role-cfg Mar 25 17:56:24.640: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1594-4453/csi-resizer Mar 25 17:56:24.654: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1594 Mar 25 17:56:24.682: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1594 Mar 25 17:56:24.688: INFO: deleting *v1.Role: csi-mock-volumes-1594-4453/external-resizer-cfg-csi-mock-volumes-1594 Mar 25 17:56:24.702: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1594-4453/csi-resizer-role-cfg Mar 25 17:56:24.708: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1594-4453/csi-snapshotter Mar 25 17:56:24.732: INFO: deleting *v1.ClusterRole: 
external-snapshotter-runner-csi-mock-volumes-1594 Mar 25 17:56:24.738: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1594 Mar 25 17:56:24.755: INFO: deleting *v1.Role: csi-mock-volumes-1594-4453/external-snapshotter-leaderelection-csi-mock-volumes-1594 Mar 25 17:56:24.774: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1594-4453/external-snapshotter-leaderelection Mar 25 17:56:24.804: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1594-4453/csi-mock Mar 25 17:56:24.815: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1594 Mar 25 17:56:24.822: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1594 Mar 25 17:56:24.832: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1594 Mar 25 17:56:24.839: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1594 Mar 25 17:56:24.845: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1594 Mar 25 17:56:24.851: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1594 Mar 25 17:56:24.857: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1594 Mar 25 17:56:24.871: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1594-4453/csi-mockplugin Mar 25 17:56:24.932: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-1594 Mar 25 17:56:24.942: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1594-4453/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-1594-4453 STEP: Waiting for namespaces [csi-mock-volumes-1594-4453] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:57:42.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:133.806 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should not be passed when podInfoOnMount=nil /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":115,"completed":45,"skipped":3097,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:57:42.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 17:57:45.148: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-e0028007-636d-44a9-bcbb-8f924a467d4d-backend && ln -s /tmp/local-volume-test-e0028007-636d-44a9-bcbb-8f924a467d4d-backend /tmp/local-volume-test-e0028007-636d-44a9-bcbb-8f924a467d4d] Namespace:persistent-local-volumes-test-6003 PodName:hostexec-latest-worker-xjp8h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:57:45.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 17:57:45.262: INFO: Creating a PV followed by a PVC Mar 25 17:57:45.342: INFO: Waiting for PV local-pvcwwq2 to bind to PVC pvc-7tlm2 Mar 25 17:57:45.342: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-7tlm2] to have phase Bound Mar 25 17:57:45.345: INFO: PersistentVolumeClaim pvc-7tlm2 found but phase is Pending instead of Bound. Mar 25 17:57:47.351: INFO: PersistentVolumeClaim pvc-7tlm2 found but phase is Pending instead of Bound. Mar 25 17:57:49.356: INFO: PersistentVolumeClaim pvc-7tlm2 found and phase=Bound (4.013897324s) Mar 25 17:57:49.356: INFO: Waiting up to 3m0s for PersistentVolume local-pvcwwq2 to have phase Bound Mar 25 17:57:49.360: INFO: PersistentVolume local-pvcwwq2 found and phase=Bound (3.60644ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 17:57:53.388: INFO: pod "pod-84f852c3-42ab-4abd-8d0c-59b25c568774" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 17:57:53.388: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6003 PodName:pod-84f852c3-42ab-4abd-8d0c-59b25c568774 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:57:53.388: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:57:53.518: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Mar 25 17:57:53.518: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6003 PodName:pod-84f852c3-42ab-4abd-8d0c-59b25c568774 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:57:53.518: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:57:53.598: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-84f852c3-42ab-4abd-8d0c-59b25c568774 in namespace persistent-local-volumes-test-6003 [AfterEach] [Volume 
type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:57:53.603: INFO: Deleting PersistentVolumeClaim "pvc-7tlm2" Mar 25 17:57:53.679: INFO: Deleting PersistentVolume "local-pvcwwq2" STEP: Removing the test directory Mar 25 17:57:53.700: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e0028007-636d-44a9-bcbb-8f924a467d4d && rm -r /tmp/local-volume-test-e0028007-636d-44a9-bcbb-8f924a467d4d-backend] Namespace:persistent-local-volumes-test-6003 PodName:hostexec-latest-worker-xjp8h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:57:53.700: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:57:53.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6003" for this suite. • [SLOW TEST:10.868 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":115,"completed":46,"skipped":3121,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:57:53.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106 STEP: Creating a pod to test downward API volume plugin Mar 25 17:57:53.946: INFO: Waiting up to 5m0s for pod "metadata-volume-6f73d211-3318-4d6f-8278-0a64e5205afe" in namespace "projected-2976" to be "Succeeded or Failed" Mar 25 17:57:53.953: INFO: Pod 
"metadata-volume-6f73d211-3318-4d6f-8278-0a64e5205afe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.940085ms Mar 25 17:57:55.957: INFO: Pod "metadata-volume-6f73d211-3318-4d6f-8278-0a64e5205afe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010498382s Mar 25 17:57:57.965: INFO: Pod "metadata-volume-6f73d211-3318-4d6f-8278-0a64e5205afe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018245599s Mar 25 17:57:59.970: INFO: Pod "metadata-volume-6f73d211-3318-4d6f-8278-0a64e5205afe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023503391s STEP: Saw pod success Mar 25 17:57:59.970: INFO: Pod "metadata-volume-6f73d211-3318-4d6f-8278-0a64e5205afe" satisfied condition "Succeeded or Failed" Mar 25 17:57:59.973: INFO: Trying to get logs from node latest-worker2 pod metadata-volume-6f73d211-3318-4d6f-8278-0a64e5205afe container client-container: STEP: delete the pod Mar 25 17:58:00.028: INFO: Waiting for pod metadata-volume-6f73d211-3318-4d6f-8278-0a64e5205afe to disappear Mar 25 17:58:00.041: INFO: Pod metadata-volume-6f73d211-3318-4d6f-8278-0a64e5205afe no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:58:00.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2976" for this suite. • [SLOW TEST:6.237 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":115,"completed":47,"skipped":3130,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:58:00.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-9a51ba81-f4e3-47eb-bbae-4646bcabdd17" Mar 25 17:58:04.166: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
"/tmp/local-volume-test-9a51ba81-f4e3-47eb-bbae-4646bcabdd17" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-9a51ba81-f4e3-47eb-bbae-4646bcabdd17" "/tmp/local-volume-test-9a51ba81-f4e3-47eb-bbae-4646bcabdd17"] Namespace:persistent-local-volumes-test-1199 PodName:hostexec-latest-worker-jtt56 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:58:04.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 17:58:04.267: INFO: Creating a PV followed by a PVC Mar 25 17:58:04.280: INFO: Waiting for PV local-pvc76kp to bind to PVC pvc-k5m9f Mar 25 17:58:04.280: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-k5m9f] to have phase Bound Mar 25 17:58:04.286: INFO: PersistentVolumeClaim pvc-k5m9f found but phase is Pending instead of Bound. Mar 25 17:58:06.291: INFO: PersistentVolumeClaim pvc-k5m9f found but phase is Pending instead of Bound. Mar 25 17:58:08.295: INFO: PersistentVolumeClaim pvc-k5m9f found but phase is Pending instead of Bound. Mar 25 17:58:10.300: INFO: PersistentVolumeClaim pvc-k5m9f found but phase is Pending instead of Bound. Mar 25 17:58:12.305: INFO: PersistentVolumeClaim pvc-k5m9f found but phase is Pending instead of Bound. Mar 25 17:58:14.310: INFO: PersistentVolumeClaim pvc-k5m9f found but phase is Pending instead of Bound. Mar 25 17:58:16.315: INFO: PersistentVolumeClaim pvc-k5m9f found but phase is Pending instead of Bound. Mar 25 17:58:18.320: INFO: PersistentVolumeClaim pvc-k5m9f found and phase=Bound (14.040128295s) Mar 25 17:58:18.320: INFO: Waiting up to 3m0s for PersistentVolume local-pvc76kp to have phase Bound Mar 25 17:58:18.324: INFO: PersistentVolume local-pvc76kp found and phase=Bound (3.27112ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 17:58:22.402: INFO: pod "pod-b4edfdd4-bedf-493b-ac57-98e77af17ac8" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 17:58:22.402: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1199 PodName:pod-b4edfdd4-bedf-493b-ac57-98e77af17ac8 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:58:22.402: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:58:22.499: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Mar 25 17:58:22.499: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1199 PodName:pod-b4edfdd4-bedf-493b-ac57-98e77af17ac8 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:58:22.499: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:58:22.615: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Mar 25 17:58:22.615: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-9a51ba81-f4e3-47eb-bbae-4646bcabdd17 > /mnt/volume1/test-file] 
Namespace:persistent-local-volumes-test-1199 PodName:pod-b4edfdd4-bedf-493b-ac57-98e77af17ac8 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:58:22.615: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:58:22.723: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-9a51ba81-f4e3-47eb-bbae-4646bcabdd17 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-b4edfdd4-bedf-493b-ac57-98e77af17ac8 in namespace persistent-local-volumes-test-1199 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:58:22.730: INFO: Deleting PersistentVolumeClaim "pvc-k5m9f" Mar 25 17:58:22.752: INFO: Deleting PersistentVolume "local-pvc76kp" STEP: Unmount tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-9a51ba81-f4e3-47eb-bbae-4646bcabdd17" Mar 25 17:58:22.793: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-9a51ba81-f4e3-47eb-bbae-4646bcabdd17"] Namespace:persistent-local-volumes-test-1199 PodName:hostexec-latest-worker-jtt56 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:58:22.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Mar 25 17:58:22.944: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9a51ba81-f4e3-47eb-bbae-4646bcabdd17] Namespace:persistent-local-volumes-test-1199 PodName:hostexec-latest-worker-jtt56 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:58:22.944: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:58:23.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1199" for this suite. 
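------------------------------
The ExecWithOptions records above show the framework writing and re-reading /mnt/volume1/test-file by running /bin/sh inside the write-pod container. A rough standalone equivalent using client-go's remotecommand package (again assuming a v0.21.x client; the namespace and pod name are illustrative, while "write-pod" is the container name reported in the log):

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, pod, container := "persistent-local-volumes-test-example", "write-pod-example", "write-pod" // illustrative

	// Build the exec request the same way kubectl exec does:
	// POST .../pods/<name>/exec with the command and stream options.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace(ns).
		Name(pod).
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   []string{"/bin/sh", "-c", "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file && cat /mnt/volume1/test-file"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}

	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Printf("out: %q, stderr: %q\n", stdout.String(), stderr.String())
}
------------------------------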
• [SLOW TEST:23.014 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":115,"completed":48,"skipped":3165,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Volume limits should verify that all nodes have volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:41 [BeforeEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:58:23.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-limits-on-node STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:35 Mar 25 17:58:23.193: INFO: Only supported for providers [aws gce gke] (not local) [AfterEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:58:23.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-limits-on-node-4388" for this suite. 
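------------------------------
The "Only supported for providers [aws gce gke] (not local)" line above is the test's BeforeEach bailing out with a Ginkgo skip, which is what produces the "S [SKIPPING] in Spec Setup (BeforeEach)" summary that follows. A toy Ginkgo v1 spec showing the same pattern; the provider gate here is driven by a hypothetical E2E_PROVIDER environment variable rather than the framework's own provider configuration:

package example

import (
	"fmt"
	"os"
	"testing"

	"github.com/onsi/ginkgo"
	"github.com/onsi/gomega"
)

func TestVolumeLimits(t *testing.T) {
	gomega.RegisterFailHandler(ginkgo.Fail)
	ginkgo.RunSpecs(t, "volume limits example")
}

// skipUnlessProviderIs mimics the e2e framework's provider gate: if the
// configured provider is not in the supported list, the spec is skipped
// from BeforeEach, which Ginkgo reports as "[SKIPPING] in Spec Setup".
func skipUnlessProviderIs(supported ...string) {
	provider := os.Getenv("E2E_PROVIDER") // hypothetical stand-in for the suite's provider setting
	for _, p := range supported {
		if p == provider {
			return
		}
	}
	ginkgo.Skip(fmt.Sprintf("Only supported for providers %v (not %s)", supported, provider))
}

var _ = ginkgo.Describe("[sig-storage] Volume limits", func() {
	ginkgo.BeforeEach(func() {
		skipUnlessProviderIs("aws", "gce", "gke")
	})

	ginkgo.It("should verify that all nodes have volume limits", func() {
		// The real check inspects node allocatable attach limits; omitted
		// here because the skip gate is the point of the sketch.
	})
})
------------------------------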
S [SKIPPING] in Spec Setup (BeforeEach) [0.127 seconds] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should verify that all nodes have volume limits [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:41 Only supported for providers [aws gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:36 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:58:23.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 17:58:27.368: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-c8d9c0ef-d09f-4ffb-813c-6a1f7b9d45f4 && mount --bind /tmp/local-volume-test-c8d9c0ef-d09f-4ffb-813c-6a1f7b9d45f4 /tmp/local-volume-test-c8d9c0ef-d09f-4ffb-813c-6a1f7b9d45f4] Namespace:persistent-local-volumes-test-9818 PodName:hostexec-latest-worker-rqp94 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:58:27.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 17:58:27.491: INFO: Creating a PV followed by a PVC Mar 25 17:58:27.503: INFO: Waiting for PV local-pvvkwgn to bind to PVC pvc-j79mf Mar 25 17:58:27.503: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-j79mf] to have phase Bound Mar 25 17:58:27.523: INFO: PersistentVolumeClaim pvc-j79mf found but phase is Pending instead of Bound. Mar 25 17:58:29.528: INFO: PersistentVolumeClaim pvc-j79mf found but phase is Pending instead of Bound. Mar 25 17:58:31.533: INFO: PersistentVolumeClaim pvc-j79mf found but phase is Pending instead of Bound. 
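------------------------------
The "Creating a PV followed by a PVC" step above pre-creates a local PersistentVolume backed by the bind-mounted directory and a claim for it to bind to. A sketch of what such a pair can look like with client-go (v0.21.x assumed; the namespace, directory, storage class name, and sizes are illustrative rather than the suite's generated values, and the node name is the one reported in the log):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Illustrative values; the suite generates its own random names.
	ns := "persistent-local-volumes-test-example"
	node := "latest-worker"
	dir := "/tmp/local-volume-test-example"
	sc := "local-storage"

	// A local PersistentVolume carries node affinity so that consumers are
	// scheduled onto the node that actually holds the backing directory.
	pv := &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "local-pv"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity:                      corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("2Gi")},
			AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
			StorageClassName:              sc,
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: dir},
			},
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{node},
						}},
					}},
				},
			},
		},
	}
	if _, err := cs.CoreV1().PersistentVolumes().Create(context.TODO(), pv, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// The claim that should bind to it; requesting the same storage class and
	// size lets the PV controller match the pair, as seen in the log above.
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &sc,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("2Gi")},
			},
		},
	}
	if _, err := cs.CoreV1().PersistentVolumeClaims(ns).Create(context.TODO(), pvc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------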
Mar 25 17:58:33.538: INFO: PersistentVolumeClaim pvc-j79mf found and phase=Bound (6.035203368s) Mar 25 17:58:33.538: INFO: Waiting up to 3m0s for PersistentVolume local-pvvkwgn to have phase Bound Mar 25 17:58:33.541: INFO: PersistentVolume local-pvvkwgn found and phase=Bound (3.348153ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 17:58:37.595: INFO: pod "pod-e4c00ccb-517b-49d6-8e50-cc1ede1c1b9b" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 17:58:37.595: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9818 PodName:pod-e4c00ccb-517b-49d6-8e50-cc1ede1c1b9b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:58:37.595: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:58:37.743: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Mar 25 17:58:37.743: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9818 PodName:pod-e4c00ccb-517b-49d6-8e50-cc1ede1c1b9b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:58:37.743: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:58:37.841: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Mar 25 17:58:37.841: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-c8d9c0ef-d09f-4ffb-813c-6a1f7b9d45f4 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9818 PodName:pod-e4c00ccb-517b-49d6-8e50-cc1ede1c1b9b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:58:37.841: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:58:37.956: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-c8d9c0ef-d09f-4ffb-813c-6a1f7b9d45f4 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-e4c00ccb-517b-49d6-8e50-cc1ede1c1b9b in namespace persistent-local-volumes-test-9818 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 17:58:37.962: INFO: Deleting PersistentVolumeClaim "pvc-j79mf" Mar 25 17:58:38.003: INFO: Deleting PersistentVolume "local-pvvkwgn" STEP: Removing the test directory Mar 25 17:58:38.050: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-c8d9c0ef-d09f-4ffb-813c-6a1f7b9d45f4 && rm -r /tmp/local-volume-test-c8d9c0ef-d09f-4ffb-813c-6a1f7b9d45f4] Namespace:persistent-local-volumes-test-9818 PodName:hostexec-latest-worker-rqp94 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Mar 25 17:58:38.050: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:58:38.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9818" for this suite. • [SLOW TEST:15.015 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":115,"completed":49,"skipped":3246,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:483 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:58:38.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:483 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a PersistentVolumeClaim STEP: Ensuring resource quota status captures persistent volume claim creation STEP: Deleting a PersistentVolumeClaim STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:58:49.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4033" for this suite. • [SLOW TEST:11.237 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a persistent volume claim. 
[sig-storage] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:483 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage]","total":115,"completed":50,"skipped":3343,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSS ------------------------------ [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:58:49.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, late binding, with topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-5088 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Mar 25 17:58:49.715: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5088-3276/csi-attacher Mar 25 17:58:49.719: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5088 Mar 25 17:58:49.719: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5088 Mar 25 17:58:49.756: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5088 Mar 25 17:58:49.767: INFO: creating *v1.Role: csi-mock-volumes-5088-3276/external-attacher-cfg-csi-mock-volumes-5088 Mar 25 17:58:49.827: INFO: creating *v1.RoleBinding: csi-mock-volumes-5088-3276/csi-attacher-role-cfg Mar 25 17:58:49.836: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5088-3276/csi-provisioner Mar 25 17:58:49.851: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5088 Mar 25 17:58:49.851: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5088 Mar 25 17:58:49.857: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5088 Mar 25 17:58:49.863: INFO: creating *v1.Role: csi-mock-volumes-5088-3276/external-provisioner-cfg-csi-mock-volumes-5088 Mar 25 17:58:49.869: INFO: creating *v1.RoleBinding: csi-mock-volumes-5088-3276/csi-provisioner-role-cfg Mar 25 17:58:49.890: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5088-3276/csi-resizer Mar 25 17:58:49.920: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5088 Mar 25 17:58:49.920: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5088 Mar 25 17:58:49.947: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5088 Mar 25 17:58:49.951: INFO: creating *v1.Role: csi-mock-volumes-5088-3276/external-resizer-cfg-csi-mock-volumes-5088 Mar 25 17:58:49.954: INFO: creating *v1.RoleBinding: csi-mock-volumes-5088-3276/csi-resizer-role-cfg Mar 25 17:58:49.960: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5088-3276/csi-snapshotter Mar 25 17:58:49.993: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5088 Mar 25 17:58:49.993: INFO: Define cluster role 
external-snapshotter-runner-csi-mock-volumes-5088 Mar 25 17:58:50.013: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5088 Mar 25 17:58:50.019: INFO: creating *v1.Role: csi-mock-volumes-5088-3276/external-snapshotter-leaderelection-csi-mock-volumes-5088 Mar 25 17:58:50.035: INFO: creating *v1.RoleBinding: csi-mock-volumes-5088-3276/external-snapshotter-leaderelection Mar 25 17:58:50.079: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5088-3276/csi-mock Mar 25 17:58:50.086: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5088 Mar 25 17:58:50.090: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5088 Mar 25 17:58:50.097: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5088 Mar 25 17:58:50.154: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5088 Mar 25 17:58:50.253: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5088 Mar 25 17:58:50.257: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5088 Mar 25 17:58:50.272: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5088 Mar 25 17:58:50.284: INFO: creating *v1.StatefulSet: csi-mock-volumes-5088-3276/csi-mockplugin Mar 25 17:58:50.309: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5088 Mar 25 17:58:50.396: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5088" Mar 25 17:58:50.399: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5088 to register on node latest-worker I0325 17:58:59.201773 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5088","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0325 17:58:59.297493 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0325 17:58:59.299761 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5088","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0325 17:58:59.346784 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]},"Error":"","FullError":null} I0325 17:58:59.390891 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0325 17:58:59.453411 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-5088","accessible_topology":{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}},"Error":"","FullError":null} STEP: Creating pod Mar 25 17:58:59.967: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, 
treating as nil I0325 17:59:00.029924 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-972bf8d7-c629-489c-9aa1-fa76eeef2da6","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I0325 17:59:01.052979 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-972bf8d7-c629-489c-9aa1-fa76eeef2da6","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-972bf8d7-c629-489c-9aa1-fa76eeef2da6"},"accessible_topology":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Error":"","FullError":null} I0325 17:59:02.230387 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Mar 25 17:59:02.233: INFO: >>> kubeConfig: /root/.kube/config I0325 17:59:02.382883 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-972bf8d7-c629-489c-9aa1-fa76eeef2da6/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-972bf8d7-c629-489c-9aa1-fa76eeef2da6","storage.kubernetes.io/csiProvisionerIdentity":"1616695139434-8081-csi-mock-csi-mock-volumes-5088"}},"Response":{},"Error":"","FullError":null} I0325 17:59:02.391253 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Mar 25 17:59:02.393: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:59:02.495: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:59:02.592: INFO: >>> kubeConfig: /root/.kube/config I0325 17:59:02.689774 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-972bf8d7-c629-489c-9aa1-fa76eeef2da6/globalmount","target_path":"/var/lib/kubelet/pods/9cb04e41-2d56-45d5-8c55-6eddf429a6ac/volumes/kubernetes.io~csi/pvc-972bf8d7-c629-489c-9aa1-fa76eeef2da6/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-972bf8d7-c629-489c-9aa1-fa76eeef2da6","storage.kubernetes.io/csiProvisionerIdentity":"1616695139434-8081-csi-mock-csi-mock-volumes-5088"}},"Response":{},"Error":"","FullError":null} Mar 25 17:59:08.020: INFO: Deleting pod "pvc-volume-tester-524fk" in namespace "csi-mock-volumes-5088" Mar 25 17:59:08.028: INFO: Wait up to 5m0s for pod "pvc-volume-tester-524fk" 
to be fully deleted Mar 25 17:59:10.255: INFO: >>> kubeConfig: /root/.kube/config I0325 17:59:10.383022 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/9cb04e41-2d56-45d5-8c55-6eddf429a6ac/volumes/kubernetes.io~csi/pvc-972bf8d7-c629-489c-9aa1-fa76eeef2da6/mount"},"Response":{},"Error":"","FullError":null} I0325 17:59:10.459766 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0325 17:59:10.461804 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-972bf8d7-c629-489c-9aa1-fa76eeef2da6/globalmount"},"Response":{},"Error":"","FullError":null} I0325 17:59:56.101834 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Mar 25 17:59:57.054: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-cphww", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5088", SelfLink:"", UID:"972bf8d7-c629-489c-9aa1-fa76eeef2da6", ResourceVersion:"1282509", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752291939, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00148a1e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00148a1f8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc001aa0a90), VolumeMode:(*v1.PersistentVolumeMode)(0xc001aa0aa0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 17:59:57.054: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-cphww", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5088", SelfLink:"", UID:"972bf8d7-c629-489c-9aa1-fa76eeef2da6", ResourceVersion:"1282512", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752291939, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"latest-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", 
Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0049c3aa0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0049c3ab8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0049c3ad0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0049c3ae8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0021172d0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0021172e0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 17:59:57.054: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-cphww", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5088", SelfLink:"", UID:"972bf8d7-c629-489c-9aa1-fa76eeef2da6", ResourceVersion:"1282513", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752291939, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5088", "volume.kubernetes.io/selected-node":"latest-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000e97db8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e97dd0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000e97de8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e97e00)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000e97e18), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e97e30)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc000fc6570), VolumeMode:(*v1.PersistentVolumeMode)(0xc000fc6590), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 17:59:57.054: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-cphww", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5088", SelfLink:"", UID:"972bf8d7-c629-489c-9aa1-fa76eeef2da6", ResourceVersion:"1282517", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752291939, 
loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5088"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000e97e48), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e97e60)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000e97e78), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e97e90)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000e97ea8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e97ec0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc000fc65d0), VolumeMode:(*v1.PersistentVolumeMode)(0xc000fc65e0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 17:59:57.054: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-cphww", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5088", SelfLink:"", UID:"972bf8d7-c629-489c-9aa1-fa76eeef2da6", ResourceVersion:"1282524", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752291939, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5088", "volume.kubernetes.io/selected-node":"latest-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000e97ef0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e97f08)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000e97f20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e97f38)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000e97f50), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e97f68)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc000fc6640), VolumeMode:(*v1.PersistentVolumeMode)(0xc000fc6650), 
DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 17:59:57.055: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-cphww", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5088", SelfLink:"", UID:"972bf8d7-c629-489c-9aa1-fa76eeef2da6", ResourceVersion:"1282530", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752291939, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5088", "volume.kubernetes.io/selected-node":"latest-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000e97f98), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e97fb0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000e97fe0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003420000)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003420018), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003420030)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-972bf8d7-c629-489c-9aa1-fa76eeef2da6", StorageClassName:(*string)(0xc000fc6680), VolumeMode:(*v1.PersistentVolumeMode)(0xc000fc6700), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 17:59:57.055: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-cphww", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5088", SelfLink:"", UID:"972bf8d7-c629-489c-9aa1-fa76eeef2da6", ResourceVersion:"1282531", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752291939, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5088", "volume.kubernetes.io/selected-node":"latest-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003420060), FieldsType:"FieldsV1", 
FieldsV1:(*v1.FieldsV1)(0xc003420078)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003420090), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034200a8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034200c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034200d8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-972bf8d7-c629-489c-9aa1-fa76eeef2da6", StorageClassName:(*string)(0xc000fc6730), VolumeMode:(*v1.PersistentVolumeMode)(0xc000fc6740), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 17:59:57.055: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-cphww", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5088", SelfLink:"", UID:"972bf8d7-c629-489c-9aa1-fa76eeef2da6", ResourceVersion:"1282704", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752291939, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc003420108), DeletionGracePeriodSeconds:(*int64)(0xc0031b31d8), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5088", "volume.kubernetes.io/selected-node":"latest-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003420120), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003420138)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003420150), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003420168)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003420180), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003420198)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-972bf8d7-c629-489c-9aa1-fa76eeef2da6", StorageClassName:(*string)(0xc000fc6780), VolumeMode:(*v1.PersistentVolumeMode)(0xc000fc6790), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, 
Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 17:59:57.055: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-cphww", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5088", SelfLink:"", UID:"972bf8d7-c629-489c-9aa1-fa76eeef2da6", ResourceVersion:"1282705", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752291939, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc000f60f48), DeletionGracePeriodSeconds:(*int64)(0xc002d2abb8), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5088", "volume.kubernetes.io/selected-node":"latest-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000f60f60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000f60f78)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000f60f90), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000f60fa8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000f60fc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000f60fd8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-972bf8d7-c629-489c-9aa1-fa76eeef2da6", StorageClassName:(*string)(0xc002116820), VolumeMode:(*v1.PersistentVolumeMode)(0xc002116830), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-524fk Mar 25 17:59:57.055: INFO: Deleting pod "pvc-volume-tester-524fk" in namespace "csi-mock-volumes-5088" STEP: Deleting claim pvc-cphww STEP: Deleting storageclass csi-mock-volumes-5088-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5088 STEP: Waiting for namespaces [csi-mock-volumes-5088] to vanish STEP: uninstalling csi mock driver Mar 25 18:00:03.096: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5088-3276/csi-attacher Mar 25 18:00:03.102: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5088 Mar 25 18:00:03.106: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5088 Mar 25 18:00:03.136: INFO: deleting *v1.Role: csi-mock-volumes-5088-3276/external-attacher-cfg-csi-mock-volumes-5088 Mar 25 18:00:03.143: INFO: deleting *v1.RoleBinding: 
csi-mock-volumes-5088-3276/csi-attacher-role-cfg Mar 25 18:00:03.154: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5088-3276/csi-provisioner Mar 25 18:00:03.177: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5088 Mar 25 18:00:03.201: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5088 Mar 25 18:00:03.213: INFO: deleting *v1.Role: csi-mock-volumes-5088-3276/external-provisioner-cfg-csi-mock-volumes-5088 Mar 25 18:00:03.220: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5088-3276/csi-provisioner-role-cfg Mar 25 18:00:03.243: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5088-3276/csi-resizer Mar 25 18:00:03.257: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5088 Mar 25 18:00:03.266: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5088 Mar 25 18:00:03.274: INFO: deleting *v1.Role: csi-mock-volumes-5088-3276/external-resizer-cfg-csi-mock-volumes-5088 Mar 25 18:00:03.303: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5088-3276/csi-resizer-role-cfg Mar 25 18:00:03.316: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5088-3276/csi-snapshotter Mar 25 18:00:03.322: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5088 Mar 25 18:00:03.327: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5088 Mar 25 18:00:03.339: INFO: deleting *v1.Role: csi-mock-volumes-5088-3276/external-snapshotter-leaderelection-csi-mock-volumes-5088 Mar 25 18:00:03.363: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5088-3276/external-snapshotter-leaderelection Mar 25 18:00:03.376: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5088-3276/csi-mock Mar 25 18:00:03.382: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5088 Mar 25 18:00:03.387: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5088 Mar 25 18:00:03.399: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5088 Mar 25 18:00:03.422: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5088 Mar 25 18:00:03.430: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5088 Mar 25 18:00:03.436: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5088 Mar 25 18:00:03.441: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5088 Mar 25 18:00:03.447: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5088-3276/csi-mockplugin Mar 25 18:00:03.489: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5088 STEP: deleting the driver namespace: csi-mock-volumes-5088-3276 STEP: Waiting for namespaces [csi-mock-volumes-5088-3276] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:00:47.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:118.065 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 exhausted, late binding, with topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 
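A minimal sketch of the one-shot failure visible in the CreateVolume exchange above: the mock driver rejects the first call with a gRPC ResourceExhausted "fake error", and the external-provisioner retries about a second later and succeeds. The type and method names below are illustrative, not taken from the mock driver's source; only the error code and message mirror the log.

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// flakyCreateVolume fails exactly once with ResourceExhausted, then
// succeeds — the pattern that lets the provisioner's retry bind the
// claim on the second attempt.
type flakyCreateVolume struct {
	failed bool
}

func (f *flakyCreateVolume) call() error {
	if !f.failed {
		f.failed = true
		return status.Error(codes.ResourceExhausted, "fake error")
	}
	return nil
}

func main() {
	f := &flakyCreateVolume{}
	fmt.Println(f.call()) // rpc error: code = ResourceExhausted desc = fake error
	fmt.Println(f.call()) // <nil>
}

Because the failure is transient rather than permanent, the retry is productive: the second CreateVolume returns a volume and, as the PVC events above show, the claim still reaches Bound.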
------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology","total":115,"completed":51,"skipped":3351,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:00:47.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should require VolumeAttach for drivers with attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-2669 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 18:00:47.697: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2669-6557/csi-attacher Mar 25 18:00:47.701: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2669 Mar 25 18:00:47.701: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2669 Mar 25 18:00:47.706: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2669 Mar 25 18:00:47.711: INFO: creating *v1.Role: csi-mock-volumes-2669-6557/external-attacher-cfg-csi-mock-volumes-2669 Mar 25 18:00:47.730: INFO: creating *v1.RoleBinding: csi-mock-volumes-2669-6557/csi-attacher-role-cfg Mar 25 18:00:47.741: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2669-6557/csi-provisioner Mar 25 18:00:47.747: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2669 Mar 25 18:00:47.747: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-2669 Mar 25 18:00:47.754: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2669 Mar 25 18:00:47.772: INFO: creating *v1.Role: csi-mock-volumes-2669-6557/external-provisioner-cfg-csi-mock-volumes-2669 Mar 25 18:00:47.823: INFO: creating *v1.RoleBinding: csi-mock-volumes-2669-6557/csi-provisioner-role-cfg Mar 25 18:00:47.838: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2669-6557/csi-resizer Mar 25 18:00:47.849: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2669 Mar 25 18:00:47.849: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2669 Mar 25 18:00:47.855: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2669 Mar 25 18:00:47.862: INFO: creating *v1.Role: csi-mock-volumes-2669-6557/external-resizer-cfg-csi-mock-volumes-2669 Mar 25 18:00:47.867: INFO: creating *v1.RoleBinding: csi-mock-volumes-2669-6557/csi-resizer-role-cfg Mar 25 18:00:47.916: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2669-6557/csi-snapshotter Mar 25 18:00:47.978: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2669 Mar 25 18:00:47.978: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-2669 Mar 25 18:00:47.994: INFO: creating *v1.ClusterRoleBinding: 
csi-snapshotter-role-csi-mock-volumes-2669 Mar 25 18:00:48.028: INFO: creating *v1.Role: csi-mock-volumes-2669-6557/external-snapshotter-leaderelection-csi-mock-volumes-2669 Mar 25 18:00:48.048: INFO: creating *v1.RoleBinding: csi-mock-volumes-2669-6557/external-snapshotter-leaderelection Mar 25 18:00:48.059: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2669-6557/csi-mock Mar 25 18:00:48.065: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2669 Mar 25 18:00:48.115: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2669 Mar 25 18:00:48.120: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2669 Mar 25 18:00:48.125: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2669 Mar 25 18:00:48.131: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2669 Mar 25 18:00:48.149: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2669 Mar 25 18:00:48.161: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2669 Mar 25 18:00:48.179: INFO: creating *v1.StatefulSet: csi-mock-volumes-2669-6557/csi-mockplugin Mar 25 18:00:48.196: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2669 Mar 25 18:00:48.289: INFO: creating *v1.StatefulSet: csi-mock-volumes-2669-6557/csi-mockplugin-attacher Mar 25 18:00:48.295: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2669" Mar 25 18:00:48.298: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2669 to register on node latest-worker STEP: Creating pod Mar 25 18:00:57.961: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 18:00:57.971: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-vntjf] to have phase Bound Mar 25 18:00:57.981: INFO: PersistentVolumeClaim pvc-vntjf found but phase is Pending instead of Bound. 
Mar 25 18:00:59.987: INFO: PersistentVolumeClaim pvc-vntjf found and phase=Bound (2.015139685s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-rq2qq Mar 25 18:01:20.033: INFO: Deleting pod "pvc-volume-tester-rq2qq" in namespace "csi-mock-volumes-2669" Mar 25 18:01:20.037: INFO: Wait up to 5m0s for pod "pvc-volume-tester-rq2qq" to be fully deleted STEP: Deleting claim pvc-vntjf Mar 25 18:01:26.052: INFO: Waiting up to 2m0s for PersistentVolume pvc-dde61635-b4cf-4051-b3b8-1c3db223cc69 to get deleted Mar 25 18:01:26.067: INFO: PersistentVolume pvc-dde61635-b4cf-4051-b3b8-1c3db223cc69 found and phase=Bound (14.728347ms) Mar 25 18:01:28.072: INFO: PersistentVolume pvc-dde61635-b4cf-4051-b3b8-1c3db223cc69 found and phase=Released (2.019052385s) Mar 25 18:01:30.075: INFO: PersistentVolume pvc-dde61635-b4cf-4051-b3b8-1c3db223cc69 was removed STEP: Deleting storageclass csi-mock-volumes-2669-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-2669 STEP: Waiting for namespaces [csi-mock-volumes-2669] to vanish STEP: uninstalling csi mock driver Mar 25 18:01:36.097: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2669-6557/csi-attacher Mar 25 18:01:36.108: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2669 Mar 25 18:01:36.119: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2669 Mar 25 18:01:36.128: INFO: deleting *v1.Role: csi-mock-volumes-2669-6557/external-attacher-cfg-csi-mock-volumes-2669 Mar 25 18:01:36.197: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2669-6557/csi-attacher-role-cfg Mar 25 18:01:36.207: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2669-6557/csi-provisioner Mar 25 18:01:36.230: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2669 Mar 25 18:01:36.235: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2669 Mar 25 18:01:36.246: INFO: deleting *v1.Role: csi-mock-volumes-2669-6557/external-provisioner-cfg-csi-mock-volumes-2669 Mar 25 18:01:36.253: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2669-6557/csi-provisioner-role-cfg Mar 25 18:01:36.259: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2669-6557/csi-resizer Mar 25 18:01:36.264: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2669 Mar 25 18:01:36.275: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2669 Mar 25 18:01:36.337: INFO: deleting *v1.Role: csi-mock-volumes-2669-6557/external-resizer-cfg-csi-mock-volumes-2669 Mar 25 18:01:36.344: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2669-6557/csi-resizer-role-cfg Mar 25 18:01:36.349: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2669-6557/csi-snapshotter Mar 25 18:01:36.355: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2669 Mar 25 18:01:36.365: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2669 Mar 25 18:01:36.373: INFO: deleting *v1.Role: csi-mock-volumes-2669-6557/external-snapshotter-leaderelection-csi-mock-volumes-2669 Mar 25 18:01:36.379: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2669-6557/external-snapshotter-leaderelection Mar 25 18:01:36.384: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2669-6557/csi-mock Mar 25 18:01:36.391: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2669 Mar 25 18:01:36.401: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2669 
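The "Checking if VolumeAttachment was created for the pod" step earlier in this test verifies that the attach-required mock driver produced a VolumeAttachment object for the provisioned PV. A minimal client-go sketch of such a check, assuming the kubeconfig path used throughout this run; the PV name is the one provisioned above, and this is not the framework's own helper code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// PV name taken from the log above; any bound PV of an attach-required
	// CSI driver should have a matching VolumeAttachment while mounted.
	pvName := "pvc-dde61635-b4cf-4051-b3b8-1c3db223cc69"
	vas, err := cs.StorageV1().VolumeAttachments().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, va := range vas.Items {
		if va.Spec.Source.PersistentVolumeName != nil && *va.Spec.Source.PersistentVolumeName == pvName {
			fmt.Printf("found VolumeAttachment %s on node %s\n", va.Name, va.Spec.NodeName)
		}
	}
}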
Mar 25 18:01:36.421: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2669 Mar 25 18:01:36.453: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2669 Mar 25 18:01:36.458: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2669 Mar 25 18:01:36.463: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2669 Mar 25 18:01:36.469: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2669 Mar 25 18:01:36.475: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2669-6557/csi-mockplugin Mar 25 18:01:36.481: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-2669 Mar 25 18:01:36.505: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2669-6557/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-2669-6557 STEP: Waiting for namespaces [csi-mock-volumes-2669-6557] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:02:04.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:77.017 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should require VolumeAttach for drivers with attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":115,"completed":52,"skipped":3396,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:02:04.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 18:02:09.105: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-211ce25b-f2a4-4e0b-928b-33a30da1897a-backend && mount --bind /tmp/local-volume-test-211ce25b-f2a4-4e0b-928b-33a30da1897a-backend /tmp/local-volume-test-211ce25b-f2a4-4e0b-928b-33a30da1897a-backend && ln -s 
/tmp/local-volume-test-211ce25b-f2a4-4e0b-928b-33a30da1897a-backend /tmp/local-volume-test-211ce25b-f2a4-4e0b-928b-33a30da1897a] Namespace:persistent-local-volumes-test-9908 PodName:hostexec-latest-worker2-6nvft ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:02:09.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:02:09.240: INFO: Creating a PV followed by a PVC Mar 25 18:02:09.267: INFO: Waiting for PV local-pvg72vc to bind to PVC pvc-74g6h Mar 25 18:02:09.267: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-74g6h] to have phase Bound Mar 25 18:02:09.282: INFO: PersistentVolumeClaim pvc-74g6h found but phase is Pending instead of Bound. Mar 25 18:02:11.288: INFO: PersistentVolumeClaim pvc-74g6h found but phase is Pending instead of Bound. Mar 25 18:02:13.293: INFO: PersistentVolumeClaim pvc-74g6h found but phase is Pending instead of Bound. Mar 25 18:02:15.298: INFO: PersistentVolumeClaim pvc-74g6h found but phase is Pending instead of Bound. Mar 25 18:02:17.303: INFO: PersistentVolumeClaim pvc-74g6h found but phase is Pending instead of Bound. Mar 25 18:02:19.307: INFO: PersistentVolumeClaim pvc-74g6h found and phase=Bound (10.039657082s) Mar 25 18:02:19.307: INFO: Waiting up to 3m0s for PersistentVolume local-pvg72vc to have phase Bound Mar 25 18:02:19.309: INFO: PersistentVolume local-pvg72vc found and phase=Bound (2.610585ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 18:02:23.360: INFO: pod "pod-055dcdad-fe1f-43bc-bf52-0a94e986576b" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 18:02:23.360: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9908 PodName:pod-055dcdad-fe1f-43bc-bf52-0a94e986576b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:02:23.360: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:02:23.490: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Mar 25 18:02:23.491: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9908 PodName:pod-055dcdad-fe1f-43bc-bf52-0a94e986576b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:02:23.491: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:02:23.576: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-055dcdad-fe1f-43bc-bf52-0a94e986576b in namespace persistent-local-volumes-test-9908 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 
18:02:23.583: INFO: Deleting PersistentVolumeClaim "pvc-74g6h" Mar 25 18:02:23.607: INFO: Deleting PersistentVolume "local-pvg72vc" STEP: Removing the test directory Mar 25 18:02:23.620: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-211ce25b-f2a4-4e0b-928b-33a30da1897a && umount /tmp/local-volume-test-211ce25b-f2a4-4e0b-928b-33a30da1897a-backend && rm -r /tmp/local-volume-test-211ce25b-f2a4-4e0b-928b-33a30da1897a-backend] Namespace:persistent-local-volumes-test-9908 PodName:hostexec-latest-worker2-6nvft ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:02:23.620: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:02:23.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9908" for this suite. • [SLOW TEST:19.222 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":115,"completed":53,"skipped":3409,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:02:23.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker" using path "/tmp/local-volume-test-f9dceac2-5f35-460b-abf0-e94c3d8d6065" Mar 25 18:02:27.911: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
/tmp/local-volume-test-f9dceac2-5f35-460b-abf0-e94c3d8d6065 && dd if=/dev/zero of=/tmp/local-volume-test-f9dceac2-5f35-460b-abf0-e94c3d8d6065/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-f9dceac2-5f35-460b-abf0-e94c3d8d6065/file] Namespace:persistent-local-volumes-test-9255 PodName:hostexec-latest-worker-zc5w8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:02:27.911: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:02:28.084: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-f9dceac2-5f35-460b-abf0-e94c3d8d6065/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9255 PodName:hostexec-latest-worker-zc5w8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:02:28.084: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:02:28.182: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-f9dceac2-5f35-460b-abf0-e94c3d8d6065 && chmod o+rwx /tmp/local-volume-test-f9dceac2-5f35-460b-abf0-e94c3d8d6065] Namespace:persistent-local-volumes-test-9255 PodName:hostexec-latest-worker-zc5w8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:02:28.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:02:28.590: INFO: Creating a PV followed by a PVC Mar 25 18:02:28.619: INFO: Waiting for PV local-pvzg76g to bind to PVC pvc-r9grz Mar 25 18:02:28.619: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-r9grz] to have phase Bound Mar 25 18:02:28.625: INFO: PersistentVolumeClaim pvc-r9grz found but phase is Pending instead of Bound. Mar 25 18:02:30.630: INFO: PersistentVolumeClaim pvc-r9grz found but phase is Pending instead of Bound. 
Mar 25 18:02:32.634: INFO: PersistentVolumeClaim pvc-r9grz found and phase=Bound (4.015322258s) Mar 25 18:02:32.635: INFO: Waiting up to 3m0s for PersistentVolume local-pvzg76g to have phase Bound Mar 25 18:02:32.642: INFO: PersistentVolume local-pvzg76g found and phase=Bound (6.884653ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Mar 25 18:02:36.680: INFO: pod "pod-af18cdee-e56d-4496-b279-90927383ef48" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 18:02:36.680: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9255 PodName:pod-af18cdee-e56d-4496-b279-90927383ef48 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:02:36.680: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:02:36.804: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 18:02:36.804: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9255 PodName:pod-af18cdee-e56d-4496-b279-90927383ef48 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:02:36.804: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:02:36.910: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Mar 25 18:02:40.981: INFO: pod "pod-e95b2aa2-ef5e-44fe-87ce-013ea479adca" created on Node "latest-worker" Mar 25 18:02:40.981: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9255 PodName:pod-e95b2aa2-ef5e-44fe-87ce-013ea479adca ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:02:40.981: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:02:41.102: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Mar 25 18:02:41.103: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-f9dceac2-5f35-460b-abf0-e94c3d8d6065 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9255 PodName:pod-e95b2aa2-ef5e-44fe-87ce-013ea479adca ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:02:41.103: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:02:41.199: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-f9dceac2-5f35-460b-abf0-e94c3d8d6065 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Mar 25 18:02:41.199: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9255 PodName:pod-af18cdee-e56d-4496-b279-90927383ef48 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:02:41.199: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:02:41.308: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-f9dceac2-5f35-460b-abf0-e94c3d8d6065", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod 
pod-af18cdee-e56d-4496-b279-90927383ef48 in namespace persistent-local-volumes-test-9255 STEP: Deleting pod2 STEP: Deleting pod pod-e95b2aa2-ef5e-44fe-87ce-013ea479adca in namespace persistent-local-volumes-test-9255 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 18:02:41.350: INFO: Deleting PersistentVolumeClaim "pvc-r9grz" Mar 25 18:02:41.365: INFO: Deleting PersistentVolume "local-pvzg76g" Mar 25 18:02:41.375: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-f9dceac2-5f35-460b-abf0-e94c3d8d6065] Namespace:persistent-local-volumes-test-9255 PodName:hostexec-latest-worker-zc5w8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:02:41.375: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:02:41.532: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-f9dceac2-5f35-460b-abf0-e94c3d8d6065/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9255 PodName:hostexec-latest-worker-zc5w8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:02:41.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker" at path /tmp/local-volume-test-f9dceac2-5f35-460b-abf0-e94c3d8d6065/file Mar 25 18:02:41.632: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-9255 PodName:hostexec-latest-worker-zc5w8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:02:41.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-f9dceac2-5f35-460b-abf0-e94c3d8d6065 Mar 25 18:02:41.743: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f9dceac2-5f35-460b-abf0-e94c3d8d6065] Namespace:persistent-local-volumes-test-9255 PodName:hostexec-latest-worker-zc5w8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:02:41.743: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:02:41.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9255" for this suite. 
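The "blockfswithformat" volume type exercised above is backed by a loop device: the hostexec pod creates a 20 MiB zero-filled file, attaches it with losetup, formats it ext4, mounts it, and tears everything down in reverse order afterwards. A condensed sketch of that lifecycle using Go's os/exec, with an illustrative path and `losetup -f --show` standing in for the separate losetup/grep pair seen in the log; this is not the test framework's utility code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(cmd string) string {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s: %v\n%s", cmd, err, out))
	}
	return strings.TrimSpace(string(out))
}

func main() {
	dir := "/tmp/local-volume-test-example" // illustrative path
	// Setup: 20 MiB zero-filled backing file, attach a loop device,
	// format it ext4 and mount it at the same directory.
	run(fmt.Sprintf("mkdir -p %s && dd if=/dev/zero of=%s/file bs=4096 count=5120", dir, dir))
	loopDev := run(fmt.Sprintf("losetup -f --show %s/file", dir))
	run(fmt.Sprintf("mkfs -t ext4 %s && mount -t ext4 %s %s && chmod o+rwx %s", loopDev, loopDev, dir, dir))
	fmt.Println("local volume ready on", loopDev)
	// Teardown mirrors the cleanup steps in the log: unmount, detach
	// the loop device, remove the backing directory.
	run(fmt.Sprintf("umount %s", dir))
	run(fmt.Sprintf("losetup -d %s", loopDev))
	run(fmt.Sprintf("rm -r %s", dir))
}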
• [SLOW TEST:18.107 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":115,"completed":54,"skipped":3451,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:02:41.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59 STEP: Creating configMap with name projected-configmap-test-volume-b46a8613-86b3-49b1-9a58-6f9cfca76cc2 STEP: Creating a pod to test consume configMaps Mar 25 18:02:41.982: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-40765d29-a983-4135-a628-c785d72d6dff" in namespace "projected-2237" to be "Succeeded or Failed" Mar 25 18:02:41.991: INFO: Pod "pod-projected-configmaps-40765d29-a983-4135-a628-c785d72d6dff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.909505ms Mar 25 18:02:43.994: INFO: Pod "pod-projected-configmaps-40765d29-a983-4135-a628-c785d72d6dff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012263379s Mar 25 18:02:45.999: INFO: Pod "pod-projected-configmaps-40765d29-a983-4135-a628-c785d72d6dff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016914426s Mar 25 18:02:48.005: INFO: Pod "pod-projected-configmaps-40765d29-a983-4135-a628-c785d72d6dff": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.022872691s STEP: Saw pod success Mar 25 18:02:48.005: INFO: Pod "pod-projected-configmaps-40765d29-a983-4135-a628-c785d72d6dff" satisfied condition "Succeeded or Failed" Mar 25 18:02:48.008: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-40765d29-a983-4135-a628-c785d72d6dff container agnhost-container: STEP: delete the pod Mar 25 18:02:48.099: INFO: Waiting for pod pod-projected-configmaps-40765d29-a983-4135-a628-c785d72d6dff to disappear Mar 25 18:02:48.117: INFO: Pod pod-projected-configmaps-40765d29-a983-4135-a628-c785d72d6dff no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:02:48.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2237" for this suite. • [SLOW TEST:6.244 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":115,"completed":55,"skipped":3458,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:02:48.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should modify fsGroup if fsGroupPolicy=default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 STEP: Building a driver namespace object, basename csi-mock-volumes-4868 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 18:02:48.285: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4868-281/csi-attacher Mar 25 18:02:48.287: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4868 Mar 25 18:02:48.287: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4868 Mar 25 18:02:48.320: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4868 Mar 25 18:02:48.323: INFO: creating *v1.Role: csi-mock-volumes-4868-281/external-attacher-cfg-csi-mock-volumes-4868 Mar 25 18:02:48.351: INFO: creating *v1.RoleBinding: csi-mock-volumes-4868-281/csi-attacher-role-cfg Mar 25 18:02:48.372: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4868-281/csi-provisioner Mar 25 18:02:48.458: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4868 Mar 25 18:02:48.458: INFO: Define 
cluster role external-provisioner-runner-csi-mock-volumes-4868 Mar 25 18:02:48.463: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4868 Mar 25 18:02:48.470: INFO: creating *v1.Role: csi-mock-volumes-4868-281/external-provisioner-cfg-csi-mock-volumes-4868 Mar 25 18:02:48.486: INFO: creating *v1.RoleBinding: csi-mock-volumes-4868-281/csi-provisioner-role-cfg Mar 25 18:02:48.499: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4868-281/csi-resizer Mar 25 18:02:48.516: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4868 Mar 25 18:02:48.516: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4868 Mar 25 18:02:48.530: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4868 Mar 25 18:02:48.542: INFO: creating *v1.Role: csi-mock-volumes-4868-281/external-resizer-cfg-csi-mock-volumes-4868 Mar 25 18:02:48.602: INFO: creating *v1.RoleBinding: csi-mock-volumes-4868-281/csi-resizer-role-cfg Mar 25 18:02:48.606: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4868-281/csi-snapshotter Mar 25 18:02:48.619: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4868 Mar 25 18:02:48.619: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4868 Mar 25 18:02:48.625: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4868 Mar 25 18:02:48.631: INFO: creating *v1.Role: csi-mock-volumes-4868-281/external-snapshotter-leaderelection-csi-mock-volumes-4868 Mar 25 18:02:48.678: INFO: creating *v1.RoleBinding: csi-mock-volumes-4868-281/external-snapshotter-leaderelection Mar 25 18:02:48.728: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4868-281/csi-mock Mar 25 18:02:48.739: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4868 Mar 25 18:02:48.745: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4868 Mar 25 18:02:48.751: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4868 Mar 25 18:02:48.768: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4868 Mar 25 18:02:48.781: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4868 Mar 25 18:02:48.858: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4868 Mar 25 18:02:48.863: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4868 Mar 25 18:02:48.867: INFO: creating *v1.StatefulSet: csi-mock-volumes-4868-281/csi-mockplugin Mar 25 18:02:48.872: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4868 Mar 25 18:02:48.894: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4868" Mar 25 18:02:48.917: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4868 to register on node latest-worker2 STEP: Creating pod with fsGroup Mar 25 18:03:03.532: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 18:03:03.562: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-w5w5n] to have phase Bound Mar 25 18:03:03.574: INFO: PersistentVolumeClaim pvc-w5w5n found but phase is Pending instead of Bound. 
Mar 25 18:03:05.578: INFO: PersistentVolumeClaim pvc-w5w5n found and phase=Bound (2.015717905s) Mar 25 18:03:09.879: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-4868] Namespace:csi-mock-volumes-4868 PodName:pvc-volume-tester-xwxh8 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:03:09.879: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:03:09.975: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-4868/csi-mock-volumes-4868'; sync] Namespace:csi-mock-volumes-4868 PodName:pvc-volume-tester-xwxh8 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:03:09.975: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:03:50.311: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-4868/csi-mock-volumes-4868] Namespace:csi-mock-volumes-4868 PodName:pvc-volume-tester-xwxh8 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:03:50.311: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:03:50.420: INFO: pod csi-mock-volumes-4868/pvc-volume-tester-xwxh8 exec for cmd ls -l /mnt/test/csi-mock-volumes-4868/csi-mock-volumes-4868, stdout: -rw-r--r-- 1 root 18278 13 Mar 25 18:03 /mnt/test/csi-mock-volumes-4868/csi-mock-volumes-4868, stderr: Mar 25 18:03:50.420: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-4868] Namespace:csi-mock-volumes-4868 PodName:pvc-volume-tester-xwxh8 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:03:50.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-xwxh8 Mar 25 18:03:50.502: INFO: Deleting pod "pvc-volume-tester-xwxh8" in namespace "csi-mock-volumes-4868" Mar 25 18:03:50.509: INFO: Wait up to 5m0s for pod "pvc-volume-tester-xwxh8" to be fully deleted STEP: Deleting claim pvc-w5w5n Mar 25 18:04:26.585: INFO: Waiting up to 2m0s for PersistentVolume pvc-9ef550d3-2f6c-4ffa-b847-ea81c9d7d5d8 to get deleted Mar 25 18:04:26.593: INFO: PersistentVolume pvc-9ef550d3-2f6c-4ffa-b847-ea81c9d7d5d8 found and phase=Bound (7.829496ms) Mar 25 18:04:28.598: INFO: PersistentVolume pvc-9ef550d3-2f6c-4ffa-b847-ea81c9d7d5d8 was removed STEP: Deleting storageclass csi-mock-volumes-4868-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4868 STEP: Waiting for namespaces [csi-mock-volumes-4868] to vanish STEP: uninstalling csi mock driver Mar 25 18:04:34.623: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4868-281/csi-attacher Mar 25 18:04:34.630: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4868 Mar 25 18:04:34.641: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4868 Mar 25 18:04:34.648: INFO: deleting *v1.Role: csi-mock-volumes-4868-281/external-attacher-cfg-csi-mock-volumes-4868 Mar 25 18:04:34.654: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4868-281/csi-attacher-role-cfg Mar 25 18:04:34.673: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4868-281/csi-provisioner Mar 25 18:04:34.701: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4868 Mar 25 18:04:34.719: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4868 Mar 25 18:04:34.727: INFO: deleting *v1.Role: 
csi-mock-volumes-4868-281/external-provisioner-cfg-csi-mock-volumes-4868 Mar 25 18:04:34.732: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4868-281/csi-provisioner-role-cfg Mar 25 18:04:34.738: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4868-281/csi-resizer Mar 25 18:04:34.744: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4868 Mar 25 18:04:34.750: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4868 Mar 25 18:04:34.761: INFO: deleting *v1.Role: csi-mock-volumes-4868-281/external-resizer-cfg-csi-mock-volumes-4868 Mar 25 18:04:34.786: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4868-281/csi-resizer-role-cfg Mar 25 18:04:34.830: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4868-281/csi-snapshotter Mar 25 18:04:34.836: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4868 Mar 25 18:04:34.841: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4868 Mar 25 18:04:34.851: INFO: deleting *v1.Role: csi-mock-volumes-4868-281/external-snapshotter-leaderelection-csi-mock-volumes-4868 Mar 25 18:04:34.883: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4868-281/external-snapshotter-leaderelection Mar 25 18:04:34.894: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4868-281/csi-mock Mar 25 18:04:34.906: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4868 Mar 25 18:04:34.949: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4868 Mar 25 18:04:34.966: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4868 Mar 25 18:04:34.978: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4868 Mar 25 18:04:34.990: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4868 Mar 25 18:04:34.996: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4868 Mar 25 18:04:35.014: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4868 Mar 25 18:04:35.020: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4868-281/csi-mockplugin Mar 25 18:04:35.027: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4868 STEP: deleting the driver namespace: csi-mock-volumes-4868-281 STEP: Waiting for namespaces [csi-mock-volumes-4868-281] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:05:31.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:162.924 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1433 should modify fsGroup if fsGroupPolicy=default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","total":115,"completed":56,"skipped":3486,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} 
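For reference, the fsGroup handling exercised by the spec above is controlled by the CSIDriver object's fsGroupPolicy field together with the pod's securityContext.fsGroup. Below is a minimal sketch of the two objects involved, using a placeholder driver name (example.csi.vendor.io) and a pre-provisioned claim (fsgroup-demo-pvc) rather than the per-test mock driver deployed by this suite:

# Sketch only: placeholder driver and claim names, not the csi-mock driver above.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example.csi.vendor.io
spec:
  # ReadWriteOnceWithFSType is the default policy the spec above relies on:
  # the kubelet applies the pod's fsGroup when the volume has an fsType and
  # is mounted read-write by a single node.
  fsGroupPolicy: ReadWriteOnceWithFSType
---
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo
spec:
  securityContext:
    fsGroup: 18278            # same group id as seen in the `ls -l` output above
  containers:
  - name: app
    image: busybox:1.33
    command: ["sh", "-c", "id && ls -ln /data && sleep 3600"]
    volumeMounts:
    - name: vol
      mountPath: /data
  volumes:
  - name: vol
    persistentVolumeClaim:
      claimName: fsgroup-demo-pvc   # assumed to exist and be bound by the driver above
EOF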
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:05:31.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not require VolumeAttach for drivers without attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-8653 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 18:05:31.230: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8653-4337/csi-attacher Mar 25 18:05:31.234: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8653 Mar 25 18:05:31.234: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8653 Mar 25 18:05:31.237: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8653 Mar 25 18:05:31.240: INFO: creating *v1.Role: csi-mock-volumes-8653-4337/external-attacher-cfg-csi-mock-volumes-8653 Mar 25 18:05:31.263: INFO: creating *v1.RoleBinding: csi-mock-volumes-8653-4337/csi-attacher-role-cfg Mar 25 18:05:31.310: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8653-4337/csi-provisioner Mar 25 18:05:31.322: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8653 Mar 25 18:05:31.322: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8653 Mar 25 18:05:31.338: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8653 Mar 25 18:05:31.370: INFO: creating *v1.Role: csi-mock-volumes-8653-4337/external-provisioner-cfg-csi-mock-volumes-8653 Mar 25 18:05:31.387: INFO: creating *v1.RoleBinding: csi-mock-volumes-8653-4337/csi-provisioner-role-cfg Mar 25 18:05:31.396: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8653-4337/csi-resizer Mar 25 18:05:31.402: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8653 Mar 25 18:05:31.402: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8653 Mar 25 18:05:31.408: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8653 Mar 25 18:05:31.429: INFO: creating *v1.Role: csi-mock-volumes-8653-4337/external-resizer-cfg-csi-mock-volumes-8653 Mar 25 18:05:31.444: INFO: creating *v1.RoleBinding: csi-mock-volumes-8653-4337/csi-resizer-role-cfg Mar 25 18:05:31.458: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8653-4337/csi-snapshotter Mar 25 18:05:31.474: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8653 Mar 25 18:05:31.474: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8653 Mar 25 18:05:31.480: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8653 Mar 25 18:05:31.486: INFO: creating *v1.Role: csi-mock-volumes-8653-4337/external-snapshotter-leaderelection-csi-mock-volumes-8653 Mar 25 18:05:31.492: INFO: 
creating *v1.RoleBinding: csi-mock-volumes-8653-4337/external-snapshotter-leaderelection Mar 25 18:05:31.597: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8653-4337/csi-mock Mar 25 18:05:31.609: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8653 Mar 25 18:05:31.636: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8653 Mar 25 18:05:31.642: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8653 Mar 25 18:05:31.657: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8653 Mar 25 18:05:31.666: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8653 Mar 25 18:05:31.686: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8653 Mar 25 18:05:31.729: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8653 Mar 25 18:05:31.733: INFO: creating *v1.StatefulSet: csi-mock-volumes-8653-4337/csi-mockplugin Mar 25 18:05:31.750: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8653 Mar 25 18:05:31.782: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8653" Mar 25 18:05:31.798: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8653 to register on node latest-worker2 STEP: Creating pod Mar 25 18:05:41.319: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 18:05:41.328: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-znmzs] to have phase Bound Mar 25 18:05:41.350: INFO: PersistentVolumeClaim pvc-znmzs found but phase is Pending instead of Bound. Mar 25 18:05:43.364: INFO: PersistentVolumeClaim pvc-znmzs found and phase=Bound (2.035888343s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-x2qvq Mar 25 18:05:47.455: INFO: Deleting pod "pvc-volume-tester-x2qvq" in namespace "csi-mock-volumes-8653" Mar 25 18:05:47.461: INFO: Wait up to 5m0s for pod "pvc-volume-tester-x2qvq" to be fully deleted STEP: Deleting claim pvc-znmzs Mar 25 18:05:55.529: INFO: Waiting up to 2m0s for PersistentVolume pvc-9119bee6-53c1-4075-af23-d894473553f0 to get deleted Mar 25 18:05:55.573: INFO: PersistentVolume pvc-9119bee6-53c1-4075-af23-d894473553f0 found and phase=Bound (44.184823ms) Mar 25 18:05:57.577: INFO: PersistentVolume pvc-9119bee6-53c1-4075-af23-d894473553f0 was removed STEP: Deleting storageclass csi-mock-volumes-8653-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8653 STEP: Waiting for namespaces [csi-mock-volumes-8653] to vanish STEP: uninstalling csi mock driver Mar 25 18:06:03.603: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8653-4337/csi-attacher Mar 25 18:06:03.608: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8653 Mar 25 18:06:03.636: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8653 Mar 25 18:06:03.670: INFO: deleting *v1.Role: csi-mock-volumes-8653-4337/external-attacher-cfg-csi-mock-volumes-8653 Mar 25 18:06:03.678: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8653-4337/csi-attacher-role-cfg Mar 25 18:06:03.683: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8653-4337/csi-provisioner Mar 25 18:06:03.687: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8653 Mar 25 18:06:03.693: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8653 Mar 25 18:06:03.719: INFO: deleting 
*v1.Role: csi-mock-volumes-8653-4337/external-provisioner-cfg-csi-mock-volumes-8653 Mar 25 18:06:03.724: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8653-4337/csi-provisioner-role-cfg Mar 25 18:06:03.729: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8653-4337/csi-resizer Mar 25 18:06:03.736: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8653 Mar 25 18:06:03.766: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8653 Mar 25 18:06:03.778: INFO: deleting *v1.Role: csi-mock-volumes-8653-4337/external-resizer-cfg-csi-mock-volumes-8653 Mar 25 18:06:03.784: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8653-4337/csi-resizer-role-cfg Mar 25 18:06:03.789: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8653-4337/csi-snapshotter Mar 25 18:06:03.796: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8653 Mar 25 18:06:03.805: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8653 Mar 25 18:06:03.813: INFO: deleting *v1.Role: csi-mock-volumes-8653-4337/external-snapshotter-leaderelection-csi-mock-volumes-8653 Mar 25 18:06:03.839: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8653-4337/external-snapshotter-leaderelection Mar 25 18:06:03.844: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8653-4337/csi-mock Mar 25 18:06:03.849: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8653 Mar 25 18:06:03.878: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8653 Mar 25 18:06:03.885: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8653 Mar 25 18:06:03.892: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8653 Mar 25 18:06:03.897: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8653 Mar 25 18:06:03.903: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8653 Mar 25 18:06:03.909: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8653 Mar 25 18:06:03.915: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8653-4337/csi-mockplugin Mar 25 18:06:03.921: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8653 STEP: deleting the driver namespace: csi-mock-volumes-8653-4337 STEP: Waiting for namespaces [csi-mock-volumes-8653-4337] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:06:32.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:60.973 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should not require VolumeAttach for drivers without attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":115,"completed":57,"skipped":3607,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} 
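The VolumeAttachment check in the spec above hinges on the CSIDriver object's attachRequired field. A minimal sketch with a placeholder driver name follows; the kubectl queries at the end show how the same VolumeAttachment objects can be inspected by hand:

# Sketch only: placeholder driver name; the suite uses its own per-test mock driver.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example.csi.vendor.io
spec:
  # false tells Kubernetes the driver needs no ControllerPublishVolume, so no
  # VolumeAttachment object is created for its volumes (the behaviour verified above).
  attachRequired: false
EOF

# With attachRequired: true, or with no CSIDriver object registered at all (the
# "preserve attachment policy when no CSIDriver present" spec later in this run),
# attachment is assumed to be required and a VolumeAttachment is created per
# attached volume. Both can be inspected directly:
kubectl get volumeattachments
kubectl get csidrivers -o custom-columns=NAME:.metadata.name,ATTACH:.spec.attachRequired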
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:06:32.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:354 STEP: Initializing test volumes Mar 25 18:06:34.190: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-60934c67-90a7-48f8-bfd0-67b654e66b85] Namespace:persistent-local-volumes-test-7087 PodName:hostexec-latest-worker-tsxg4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:06:34.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:06:34.306: INFO: Creating a PV followed by a PVC Mar 25 18:06:34.329: INFO: Waiting for PV local-pv47t7q to bind to PVC pvc-rrn45 Mar 25 18:06:34.330: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-rrn45] to have phase Bound Mar 25 18:06:34.346: INFO: PersistentVolumeClaim pvc-rrn45 found but phase is Pending instead of Bound. Mar 25 18:06:36.352: INFO: PersistentVolumeClaim pvc-rrn45 found but phase is Pending instead of Bound. Mar 25 18:06:38.356: INFO: PersistentVolumeClaim pvc-rrn45 found but phase is Pending instead of Bound. Mar 25 18:06:40.361: INFO: PersistentVolumeClaim pvc-rrn45 found but phase is Pending instead of Bound. Mar 25 18:06:42.366: INFO: PersistentVolumeClaim pvc-rrn45 found but phase is Pending instead of Bound. Mar 25 18:06:44.370: INFO: PersistentVolumeClaim pvc-rrn45 found but phase is Pending instead of Bound. Mar 25 18:06:46.376: INFO: PersistentVolumeClaim pvc-rrn45 found but phase is Pending instead of Bound. 
Mar 25 18:06:48.379: INFO: PersistentVolumeClaim pvc-rrn45 found and phase=Bound (14.049673123s) Mar 25 18:06:48.379: INFO: Waiting up to 3m0s for PersistentVolume local-pv47t7q to have phase Bound Mar 25 18:06:48.382: INFO: PersistentVolume local-pv47t7q found and phase=Bound (2.380447ms) [It] should fail scheduling due to different NodeSelector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379 STEP: local-volume-type: dir STEP: Initializing test volumes Mar 25 18:06:48.386: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-736168e4-bae2-46f4-bbed-de2d6c70c2f7] Namespace:persistent-local-volumes-test-7087 PodName:hostexec-latest-worker-tsxg4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:06:48.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:06:48.503: INFO: Creating a PV followed by a PVC Mar 25 18:06:48.539: INFO: Waiting for PV local-pvz84tl to bind to PVC pvc-76vdc Mar 25 18:06:48.539: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-76vdc] to have phase Bound Mar 25 18:06:48.574: INFO: PersistentVolumeClaim pvc-76vdc found but phase is Pending instead of Bound. Mar 25 18:06:50.578: INFO: PersistentVolumeClaim pvc-76vdc found but phase is Pending instead of Bound. Mar 25 18:06:52.583: INFO: PersistentVolumeClaim pvc-76vdc found but phase is Pending instead of Bound. Mar 25 18:06:54.587: INFO: PersistentVolumeClaim pvc-76vdc found but phase is Pending instead of Bound. Mar 25 18:06:56.593: INFO: PersistentVolumeClaim pvc-76vdc found but phase is Pending instead of Bound. Mar 25 18:06:58.598: INFO: PersistentVolumeClaim pvc-76vdc found but phase is Pending instead of Bound. Mar 25 18:07:00.604: INFO: PersistentVolumeClaim pvc-76vdc found but phase is Pending instead of Bound. Mar 25 18:07:02.616: INFO: PersistentVolumeClaim pvc-76vdc found and phase=Bound (14.077101293s) Mar 25 18:07:02.616: INFO: Waiting up to 3m0s for PersistentVolume local-pvz84tl to have phase Bound Mar 25 18:07:02.619: INFO: PersistentVolume local-pvz84tl found and phase=Bound (2.958361ms) Mar 25 18:07:02.630: INFO: Waiting up to 5m0s for pod "pod-cf8f5dea-f953-4a54-8cf6-31074b668779" in namespace "persistent-local-volumes-test-7087" to be "Unschedulable" Mar 25 18:07:02.649: INFO: Pod "pod-cf8f5dea-f953-4a54-8cf6-31074b668779": Phase="Pending", Reason="", readiness=false. Elapsed: 19.457625ms Mar 25 18:07:04.654: INFO: Pod "pod-cf8f5dea-f953-4a54-8cf6-31074b668779": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.02421235s Mar 25 18:07:04.654: INFO: Pod "pod-cf8f5dea-f953-4a54-8cf6-31074b668779" satisfied condition "Unschedulable" [AfterEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:370 STEP: Cleaning up PVC and PV Mar 25 18:07:04.654: INFO: Deleting PersistentVolumeClaim "pvc-rrn45" Mar 25 18:07:04.658: INFO: Deleting PersistentVolume "local-pv47t7q" STEP: Removing the test directory Mar 25 18:07:04.665: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-60934c67-90a7-48f8-bfd0-67b654e66b85] Namespace:persistent-local-volumes-test-7087 PodName:hostexec-latest-worker-tsxg4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:07:04.665: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:07:04.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7087" for this suite. • [SLOW TEST:32.769 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347 should fail scheduling due to different NodeSelector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":115,"completed":58,"skipped":3706,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:07:04.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should preserve attachment policy when no CSIDriver present /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-9041 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 18:07:05.425: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9041-8350/csi-attacher Mar 25 18:07:05.442: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9041 Mar 25 18:07:05.442: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9041 Mar 25 18:07:05.446: INFO: creating 
*v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9041 Mar 25 18:07:05.454: INFO: creating *v1.Role: csi-mock-volumes-9041-8350/external-attacher-cfg-csi-mock-volumes-9041 Mar 25 18:07:05.460: INFO: creating *v1.RoleBinding: csi-mock-volumes-9041-8350/csi-attacher-role-cfg Mar 25 18:07:05.478: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9041-8350/csi-provisioner Mar 25 18:07:05.493: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9041 Mar 25 18:07:05.493: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9041 Mar 25 18:07:05.497: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9041 Mar 25 18:07:05.520: INFO: creating *v1.Role: csi-mock-volumes-9041-8350/external-provisioner-cfg-csi-mock-volumes-9041 Mar 25 18:07:05.580: INFO: creating *v1.RoleBinding: csi-mock-volumes-9041-8350/csi-provisioner-role-cfg Mar 25 18:07:05.585: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9041-8350/csi-resizer Mar 25 18:07:05.597: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9041 Mar 25 18:07:05.597: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9041 Mar 25 18:07:05.603: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9041 Mar 25 18:07:05.609: INFO: creating *v1.Role: csi-mock-volumes-9041-8350/external-resizer-cfg-csi-mock-volumes-9041 Mar 25 18:07:05.615: INFO: creating *v1.RoleBinding: csi-mock-volumes-9041-8350/csi-resizer-role-cfg Mar 25 18:07:05.636: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9041-8350/csi-snapshotter Mar 25 18:07:05.654: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9041 Mar 25 18:07:05.654: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9041 Mar 25 18:07:05.663: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9041 Mar 25 18:07:05.669: INFO: creating *v1.Role: csi-mock-volumes-9041-8350/external-snapshotter-leaderelection-csi-mock-volumes-9041 Mar 25 18:07:05.675: INFO: creating *v1.RoleBinding: csi-mock-volumes-9041-8350/external-snapshotter-leaderelection Mar 25 18:07:05.718: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9041-8350/csi-mock Mar 25 18:07:05.723: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9041 Mar 25 18:07:05.729: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9041 Mar 25 18:07:05.760: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9041 Mar 25 18:07:05.783: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9041 Mar 25 18:07:05.862: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9041 Mar 25 18:07:05.867: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9041 Mar 25 18:07:05.872: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9041 Mar 25 18:07:05.900: INFO: creating *v1.StatefulSet: csi-mock-volumes-9041-8350/csi-mockplugin Mar 25 18:07:05.921: INFO: creating *v1.StatefulSet: csi-mock-volumes-9041-8350/csi-mockplugin-attacher Mar 25 18:07:05.946: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9041 to register on node latest-worker2 STEP: Creating pod Mar 25 18:07:15.575: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 18:07:15.583: INFO: Waiting up to 5m0s for 
PersistentVolumeClaims [pvc-znx4d] to have phase Bound Mar 25 18:07:15.587: INFO: PersistentVolumeClaim pvc-znx4d found but phase is Pending instead of Bound. Mar 25 18:07:17.592: INFO: PersistentVolumeClaim pvc-znx4d found and phase=Bound (2.00837755s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-86hbj Mar 25 18:07:23.659: INFO: Deleting pod "pvc-volume-tester-86hbj" in namespace "csi-mock-volumes-9041" Mar 25 18:07:23.664: INFO: Wait up to 5m0s for pod "pvc-volume-tester-86hbj" to be fully deleted STEP: Deleting claim pvc-znx4d Mar 25 18:08:27.713: INFO: Waiting up to 2m0s for PersistentVolume pvc-e313f5ca-27c2-4406-90be-7a1f0911c3e3 to get deleted Mar 25 18:08:27.734: INFO: PersistentVolume pvc-e313f5ca-27c2-4406-90be-7a1f0911c3e3 found and phase=Bound (20.963199ms) Mar 25 18:08:29.738: INFO: PersistentVolume pvc-e313f5ca-27c2-4406-90be-7a1f0911c3e3 was removed STEP: Deleting storageclass csi-mock-volumes-9041-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-9041 STEP: Waiting for namespaces [csi-mock-volumes-9041] to vanish STEP: uninstalling csi mock driver Mar 25 18:08:35.762: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9041-8350/csi-attacher Mar 25 18:08:35.770: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9041 Mar 25 18:08:35.781: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9041 Mar 25 18:08:35.787: INFO: deleting *v1.Role: csi-mock-volumes-9041-8350/external-attacher-cfg-csi-mock-volumes-9041 Mar 25 18:08:35.817: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9041-8350/csi-attacher-role-cfg Mar 25 18:08:35.830: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9041-8350/csi-provisioner Mar 25 18:08:35.835: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9041 Mar 25 18:08:35.851: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9041 Mar 25 18:08:35.865: INFO: deleting *v1.Role: csi-mock-volumes-9041-8350/external-provisioner-cfg-csi-mock-volumes-9041 Mar 25 18:08:35.892: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9041-8350/csi-provisioner-role-cfg Mar 25 18:08:35.942: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9041-8350/csi-resizer Mar 25 18:08:35.954: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9041 Mar 25 18:08:35.960: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9041 Mar 25 18:08:35.985: INFO: deleting *v1.Role: csi-mock-volumes-9041-8350/external-resizer-cfg-csi-mock-volumes-9041 Mar 25 18:08:36.002: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9041-8350/csi-resizer-role-cfg Mar 25 18:08:36.008: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9041-8350/csi-snapshotter Mar 25 18:08:36.025: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9041 Mar 25 18:08:36.084: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9041 Mar 25 18:08:36.089: INFO: deleting *v1.Role: csi-mock-volumes-9041-8350/external-snapshotter-leaderelection-csi-mock-volumes-9041 Mar 25 18:08:36.101: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9041-8350/external-snapshotter-leaderelection Mar 25 18:08:36.105: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9041-8350/csi-mock Mar 25 18:08:36.113: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9041 Mar 25 18:08:36.120: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-provisioner-role-csi-mock-volumes-9041 Mar 25 18:08:36.128: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9041 Mar 25 18:08:36.134: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9041 Mar 25 18:08:36.140: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9041 Mar 25 18:08:36.146: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9041 Mar 25 18:08:36.168: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9041 Mar 25 18:08:36.182: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9041-8350/csi-mockplugin Mar 25 18:08:36.199: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9041-8350/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-9041-8350 STEP: Waiting for namespaces [csi-mock-volumes-9041-8350] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:09:32.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:147.456 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should preserve attachment policy when no CSIDriver present /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":115,"completed":59,"skipped":3750,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Flexvolumes should be mountable when non-attachable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:188 [BeforeEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:09:32.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename flexvolume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:169 Mar 25 18:09:32.318: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:09:32.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "flexvolume-7540" for this suite. 
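The Flexvolumes spec here is skipped because the framework found no SSH key for the local provider at /root/.ssh/id_rsa. A hedged sketch of supplying one, assuming the nodes (latest-worker and latest-worker2 in this run) accept root SSH logins, which a kind-style cluster typically does not:

# Sketch only: assumes the nodes are reachable over SSH as root; adjust user
# and host names for your environment before relying on this.
ssh-keygen -t rsa -b 4096 -N '' -f /root/.ssh/id_rsa
for node in latest-worker latest-worker2; do
  ssh-copy-id -i /root/.ssh/id_rsa.pub "root@${node}"
done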
S [SKIPPING] in Spec Setup (BeforeEach) [0.074 seconds] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should be mountable when non-attachable [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:188 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:173 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:09:32.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 18:09:34.448: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-d3a921c1-6904-454f-acd3-bea4e007b204-backend && mount --bind /tmp/local-volume-test-d3a921c1-6904-454f-acd3-bea4e007b204-backend /tmp/local-volume-test-d3a921c1-6904-454f-acd3-bea4e007b204-backend && ln -s /tmp/local-volume-test-d3a921c1-6904-454f-acd3-bea4e007b204-backend /tmp/local-volume-test-d3a921c1-6904-454f-acd3-bea4e007b204] Namespace:persistent-local-volumes-test-7746 PodName:hostexec-latest-worker-pwqf2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:09:34.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:09:34.564: INFO: Creating a PV followed by a PVC Mar 25 18:09:34.580: INFO: Waiting for PV local-pv4zkzv to bind to PVC pvc-kb7k2 Mar 25 18:09:34.580: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-kb7k2] to have phase Bound Mar 25 18:09:34.630: INFO: PersistentVolumeClaim pvc-kb7k2 found but phase is Pending instead of Bound. Mar 25 18:09:36.637: INFO: PersistentVolumeClaim pvc-kb7k2 found but phase is Pending instead of Bound. Mar 25 18:09:38.641: INFO: PersistentVolumeClaim pvc-kb7k2 found but phase is Pending instead of Bound. Mar 25 18:09:40.645: INFO: PersistentVolumeClaim pvc-kb7k2 found but phase is Pending instead of Bound. Mar 25 18:09:42.651: INFO: PersistentVolumeClaim pvc-kb7k2 found but phase is Pending instead of Bound. Mar 25 18:09:44.656: INFO: PersistentVolumeClaim pvc-kb7k2 found but phase is Pending instead of Bound. 
Mar 25 18:09:46.661: INFO: PersistentVolumeClaim pvc-kb7k2 found but phase is Pending instead of Bound. Mar 25 18:09:48.666: INFO: PersistentVolumeClaim pvc-kb7k2 found and phase=Bound (14.08606048s) Mar 25 18:09:48.666: INFO: Waiting up to 3m0s for PersistentVolume local-pv4zkzv to have phase Bound Mar 25 18:09:48.669: INFO: PersistentVolume local-pv4zkzv found and phase=Bound (2.900917ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 18:09:52.709: INFO: pod "pod-fbc1817a-6fea-48e9-bf12-1ba810a9ef9b" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 18:09:52.710: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7746 PodName:pod-fbc1817a-6fea-48e9-bf12-1ba810a9ef9b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:09:52.710: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:09:52.843: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Mar 25 18:09:52.843: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7746 PodName:pod-fbc1817a-6fea-48e9-bf12-1ba810a9ef9b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:09:52.843: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:09:52.952: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Mar 25 18:09:52.952: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-d3a921c1-6904-454f-acd3-bea4e007b204 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7746 PodName:pod-fbc1817a-6fea-48e9-bf12-1ba810a9ef9b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:09:52.952: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:09:53.046: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-d3a921c1-6904-454f-acd3-bea4e007b204 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-fbc1817a-6fea-48e9-bf12-1ba810a9ef9b in namespace persistent-local-volumes-test-7746 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 18:09:53.099: INFO: Deleting PersistentVolumeClaim "pvc-kb7k2" Mar 25 18:09:53.106: INFO: Deleting PersistentVolume "local-pv4zkzv" STEP: Removing the test directory Mar 25 18:09:53.126: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-d3a921c1-6904-454f-acd3-bea4e007b204 && umount /tmp/local-volume-test-d3a921c1-6904-454f-acd3-bea4e007b204-backend && rm -r 
/tmp/local-volume-test-d3a921c1-6904-454f-acd3-bea4e007b204-backend] Namespace:persistent-local-volumes-test-7746 PodName:hostexec-latest-worker-pwqf2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:09:53.126: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:09:53.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7746" for this suite. • [SLOW TEST:20.967 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":115,"completed":60,"skipped":3881,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Mounted volume expand Should verify mounted devices can be resized /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:117 [BeforeEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:09:53.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mounted-volume-expand STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:59 Mar 25 18:09:53.406: INFO: Only supported for providers [aws gce] (not local) [AfterEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:09:53.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mounted-volume-expand-929" for this suite. 
[AfterEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:105 Mar 25 18:09:53.423: INFO: AfterEach: Cleaning up resources for mounted volume resize S [SKIPPING] in Spec Setup (BeforeEach) [0.126 seconds] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should verify mounted devices can be resized [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:117 Only supported for providers [aws gce] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:09:53.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-9cbf26f3-5ecd-4005-a7a4-cafa57d6af20" Mar 25 18:09:57.600: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-9cbf26f3-5ecd-4005-a7a4-cafa57d6af20 && dd if=/dev/zero of=/tmp/local-volume-test-9cbf26f3-5ecd-4005-a7a4-cafa57d6af20/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-9cbf26f3-5ecd-4005-a7a4-cafa57d6af20/file] Namespace:persistent-local-volumes-test-4092 PodName:hostexec-latest-worker2-64wpw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:09:57.600: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:09:57.755: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-9cbf26f3-5ecd-4005-a7a4-cafa57d6af20/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4092 PodName:hostexec-latest-worker2-64wpw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:09:57.755: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:09:57.843: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 
/tmp/local-volume-test-9cbf26f3-5ecd-4005-a7a4-cafa57d6af20 && chmod o+rwx /tmp/local-volume-test-9cbf26f3-5ecd-4005-a7a4-cafa57d6af20] Namespace:persistent-local-volumes-test-4092 PodName:hostexec-latest-worker2-64wpw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:09:57.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:09:58.274: INFO: Creating a PV followed by a PVC Mar 25 18:09:58.397: INFO: Waiting for PV local-pv7jpvm to bind to PVC pvc-xhpcr Mar 25 18:09:58.397: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-xhpcr] to have phase Bound Mar 25 18:09:58.402: INFO: PersistentVolumeClaim pvc-xhpcr found but phase is Pending instead of Bound. Mar 25 18:10:00.407: INFO: PersistentVolumeClaim pvc-xhpcr found but phase is Pending instead of Bound. Mar 25 18:10:02.412: INFO: PersistentVolumeClaim pvc-xhpcr found but phase is Pending instead of Bound. Mar 25 18:10:04.422: INFO: PersistentVolumeClaim pvc-xhpcr found and phase=Bound (6.025487595s) Mar 25 18:10:04.422: INFO: Waiting up to 3m0s for PersistentVolume local-pv7jpvm to have phase Bound Mar 25 18:10:04.424: INFO: PersistentVolume local-pv7jpvm found and phase=Bound (1.89626ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Mar 25 18:10:11.013: INFO: pod "pod-691ef504-1f36-4fcf-8d9c-0b18dd798df0" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 18:10:11.013: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4092 PodName:pod-691ef504-1f36-4fcf-8d9c-0b18dd798df0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:10:11.013: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:10:11.135: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 18:10:11.135: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4092 PodName:pod-691ef504-1f36-4fcf-8d9c-0b18dd798df0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:10:11.135: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:10:11.266: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-691ef504-1f36-4fcf-8d9c-0b18dd798df0 in namespace persistent-local-volumes-test-4092 STEP: Creating pod2 STEP: Creating a pod Mar 25 18:10:17.456: INFO: pod "pod-f5bb4b73-6307-47d1-bb3c-508aa4e66962" created on Node "latest-worker2" STEP: Reading in pod2 Mar 25 18:10:17.456: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4092 PodName:pod-f5bb4b73-6307-47d1-bb3c-508aa4e66962 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:10:17.456: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:10:17.710: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-f5bb4b73-6307-47d1-bb3c-508aa4e66962 in namespace persistent-local-volumes-test-4092 
[AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 18:10:17.715: INFO: Deleting PersistentVolumeClaim "pvc-xhpcr" Mar 25 18:10:17.780: INFO: Deleting PersistentVolume "local-pv7jpvm" Mar 25 18:10:17.873: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-9cbf26f3-5ecd-4005-a7a4-cafa57d6af20] Namespace:persistent-local-volumes-test-4092 PodName:hostexec-latest-worker2-64wpw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:10:17.873: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:10:18.024: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-9cbf26f3-5ecd-4005-a7a4-cafa57d6af20/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4092 PodName:hostexec-latest-worker2-64wpw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:10:18.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-9cbf26f3-5ecd-4005-a7a4-cafa57d6af20/file Mar 25 18:10:18.184: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-4092 PodName:hostexec-latest-worker2-64wpw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:10:18.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-9cbf26f3-5ecd-4005-a7a4-cafa57d6af20 Mar 25 18:10:18.312: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9cbf26f3-5ecd-4005-a7a4-cafa57d6af20] Namespace:persistent-local-volumes-test-4092 PodName:hostexec-latest-worker2-64wpw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:10:18.312: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:10:18.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4092" for this suite. 
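Editor's note: the [Volume type: blockfswithformat] setup and teardown above provision the local volume by carving a loop device out of a file on the node, formatting it with ext4 and mounting it under /tmp (and reversing those steps afterwards). A minimal Go sketch of the same pipeline is below, assuming it runs as root on the node with losetup/mkfs available; the directory path and size mirror the log but are otherwise placeholders (the real test wraps the identical shell commands in nsenter so they run in the host namespaces).

```go
// loopfs.go: recreate the "blockfswithformat" node setup from the log:
// a file-backed loop device, formatted ext4 and mounted under /tmp.
// Sketch only; must run as root on the node.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(cmd string) (string, error) {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func must(cmd string) string {
	out, err := run(cmd)
	if err != nil {
		panic(fmt.Sprintf("%s: %v: %s", cmd, err, out))
	}
	return out
}

func main() {
	dir := "/tmp/local-volume-test-example" // hypothetical path

	// 20 MiB backing file, attached to the first free loop device.
	must(fmt.Sprintf("mkdir -p %s", dir))
	must(fmt.Sprintf("dd if=/dev/zero of=%s/file bs=4096 count=5120", dir))
	must(fmt.Sprintf("losetup -f %s/file", dir))

	// Find the loop device backing the file, then format and mount it.
	dev := must(fmt.Sprintf("losetup | grep %s/file | awk '{ print $1 }'", dir))
	must(fmt.Sprintf("mkfs -t ext4 %s", dev))
	must(fmt.Sprintf("mount -t ext4 %s %s && chmod o+rwx %s", dev, dir, dir))

	fmt.Println("loop-backed ext4 volume mounted at", dir)
}
```

Teardown mirrors the AfterEach above: umount the directory, detach the loop device with `losetup -d`, and remove the backing directory.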
• [SLOW TEST:25.201 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":115,"completed":61,"skipped":4021,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:10:18.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] volume on default medium should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71 STEP: Creating a pod to test emptydir volume type on node default medium Mar 25 18:10:19.861: INFO: Waiting up to 5m0s for pod "pod-21ed3096-6f6b-4032-8f8a-61e50f0631e9" in namespace "emptydir-1554" to be "Succeeded or Failed" Mar 25 18:10:20.038: INFO: Pod "pod-21ed3096-6f6b-4032-8f8a-61e50f0631e9": Phase="Pending", Reason="", readiness=false. Elapsed: 176.994947ms Mar 25 18:10:22.043: INFO: Pod "pod-21ed3096-6f6b-4032-8f8a-61e50f0631e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18170515s Mar 25 18:10:24.067: INFO: Pod "pod-21ed3096-6f6b-4032-8f8a-61e50f0631e9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.206261167s STEP: Saw pod success Mar 25 18:10:24.067: INFO: Pod "pod-21ed3096-6f6b-4032-8f8a-61e50f0631e9" satisfied condition "Succeeded or Failed" Mar 25 18:10:24.082: INFO: Trying to get logs from node latest-worker pod pod-21ed3096-6f6b-4032-8f8a-61e50f0631e9 container test-container: STEP: delete the pod Mar 25 18:10:24.148: INFO: Waiting for pod pod-21ed3096-6f6b-4032-8f8a-61e50f0631e9 to disappear Mar 25 18:10:24.192: INFO: Pod pod-21ed3096-6f6b-4032-8f8a-61e50f0631e9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:10:24.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1554" for this suite. • [SLOW TEST:5.583 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48 volume on default medium should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":115,"completed":62,"skipped":4063,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:10:24.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110 STEP: Creating configMap with name configmap-test-volume-map-1f7f71d7-d17a-4d42-9402-d121887eef82 STEP: Creating a pod to test consume configMaps Mar 25 18:10:24.407: INFO: Waiting up to 5m0s for pod "pod-configmaps-08a244ab-2f40-49d6-bb7b-4136a93fa241" in namespace "configmap-4922" to be "Succeeded or Failed" Mar 25 18:10:24.454: INFO: Pod "pod-configmaps-08a244ab-2f40-49d6-bb7b-4136a93fa241": Phase="Pending", Reason="", readiness=false. Elapsed: 46.052267ms Mar 25 18:10:26.462: INFO: Pod "pod-configmaps-08a244ab-2f40-49d6-bb7b-4136a93fa241": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054282595s Mar 25 18:10:28.469: INFO: Pod "pod-configmaps-08a244ab-2f40-49d6-bb7b-4136a93fa241": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.061904139s Mar 25 18:10:30.475: INFO: Pod "pod-configmaps-08a244ab-2f40-49d6-bb7b-4136a93fa241": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.067469434s STEP: Saw pod success Mar 25 18:10:30.475: INFO: Pod "pod-configmaps-08a244ab-2f40-49d6-bb7b-4136a93fa241" satisfied condition "Succeeded or Failed" Mar 25 18:10:30.478: INFO: Trying to get logs from node latest-worker pod pod-configmaps-08a244ab-2f40-49d6-bb7b-4136a93fa241 container agnhost-container: STEP: delete the pod Mar 25 18:10:30.507: INFO: Waiting for pod pod-configmaps-08a244ab-2f40-49d6-bb7b-4136a93fa241 to disappear Mar 25 18:10:30.516: INFO: Pod pod-configmaps-08a244ab-2f40-49d6-bb7b-4136a93fa241 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:10:30.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4922" for this suite. • [SLOW TEST:6.314 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":115,"completed":63,"skipped":4132,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:10:30.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 18:10:32.698: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-9e311ee1-ee99-4581-8e14-a4ac463b1802] Namespace:persistent-local-volumes-test-3915 PodName:hostexec-latest-worker-fpczt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:10:32.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:10:32.813: INFO: Creating a PV followed by a PVC Mar 25 18:10:32.825: INFO: Waiting for PV local-pv8fxcp to bind to PVC pvc-kqx2r Mar 25 18:10:32.825: INFO: Waiting up to 
3m0s for PersistentVolumeClaims [pvc-kqx2r] to have phase Bound Mar 25 18:10:32.841: INFO: PersistentVolumeClaim pvc-kqx2r found but phase is Pending instead of Bound. Mar 25 18:10:34.847: INFO: PersistentVolumeClaim pvc-kqx2r found and phase=Bound (2.02142862s) Mar 25 18:10:34.847: INFO: Waiting up to 3m0s for PersistentVolume local-pv8fxcp to have phase Bound Mar 25 18:10:34.852: INFO: PersistentVolume local-pv8fxcp found and phase=Bound (4.967076ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 18:10:38.896: INFO: pod "pod-638616c0-ddcd-45b0-9662-2199194d344a" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 18:10:38.896: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3915 PodName:pod-638616c0-ddcd-45b0-9662-2199194d344a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:10:38.896: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:10:39.019: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Mar 25 18:10:39.019: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3915 PodName:pod-638616c0-ddcd-45b0-9662-2199194d344a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:10:39.019: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:10:39.119: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-638616c0-ddcd-45b0-9662-2199194d344a in namespace persistent-local-volumes-test-3915 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 18:10:39.158: INFO: Deleting PersistentVolumeClaim "pvc-kqx2r" Mar 25 18:10:39.198: INFO: Deleting PersistentVolume "local-pv8fxcp" STEP: Removing the test directory Mar 25 18:10:39.257: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9e311ee1-ee99-4581-8e14-a4ac463b1802] Namespace:persistent-local-volumes-test-3915 PodName:hostexec-latest-worker-fpczt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:10:39.257: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:10:39.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3915" for this suite. 
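Editor's note: both local-volume tests above wait for the claim with a simple poll ("Waiting up to 3m0s for PersistentVolumeClaims [...] to have phase Bound", checked roughly every 2 s). A hedged client-go sketch of an equivalent wait follows; the claim name and namespace are placeholders.

```go
// waitbound.go: poll a PersistentVolumeClaim until it reports phase Bound,
// mirroring the bind-wait loops in the log. Sketch only.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns, claim := "persistent-local-volumes-test", "my-pvc" // placeholders
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), claim, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("PVC %s phase=%s\n", claim, pvc.Status.Phase)
		return pvc.Status.Phase == v1.ClaimBound, nil
	})
	if err != nil {
		panic(err)
	}
}
```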
• [SLOW TEST:8.877 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":115,"completed":64,"skipped":4211,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes GCEPD should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:141 [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:10:39.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Mar 25 18:10:39.530: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:10:39.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-8161" for this suite. 
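Editor's note: the GCEPD spec below is skipped in BeforeEach because the suite is running against the "local" provider, producing the "Only supported for providers [gce gke] (not local)" message. The following is an illustrative Ginkgo gate that reproduces that pattern; it is a standalone sketch, not the e2e framework's actual skipper helper.

```go
// providerskip_test.go: skip a spec unless the configured provider is in an
// allow-list, yielding a message like the skips seen in this log.
// Illustrative sketch only.
package e2esketch

import (
	"fmt"

	"github.com/onsi/ginkgo"
)

var provider = "local" // in the real suite this comes from the --provider flag

func skipUnlessProviderIs(supported ...string) {
	for _, p := range supported {
		if p == provider {
			return
		}
	}
	ginkgo.Skip(fmt.Sprintf("Only supported for providers %v (not %s)", supported, provider))
}

var _ = ginkgo.Describe("PersistentVolumes GCEPD (sketch)", func() {
	ginkgo.BeforeEach(func() {
		skipUnlessProviderIs("gce", "gke")
	})
	ginkgo.It("runs only on GCE/GKE", func() {})
})
```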
[AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:110 Mar 25 18:10:39.551: INFO: AfterEach: Cleaning up test resources Mar 25 18:10:39.551: INFO: pvc is nil Mar 25 18:10:39.551: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.148 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:141 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:10:39.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if nodeExpansion=off /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-8653 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 18:10:39.746: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8653-3070/csi-attacher Mar 25 18:10:39.749: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8653 Mar 25 18:10:39.749: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8653 Mar 25 18:10:39.756: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8653 Mar 25 18:10:39.762: INFO: creating *v1.Role: csi-mock-volumes-8653-3070/external-attacher-cfg-csi-mock-volumes-8653 Mar 25 18:10:39.784: INFO: creating *v1.RoleBinding: csi-mock-volumes-8653-3070/csi-attacher-role-cfg Mar 25 18:10:39.798: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8653-3070/csi-provisioner Mar 25 18:10:39.804: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8653 Mar 25 18:10:39.804: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8653 Mar 25 18:10:39.830: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8653 Mar 25 18:10:39.851: INFO: creating *v1.Role: csi-mock-volumes-8653-3070/external-provisioner-cfg-csi-mock-volumes-8653 Mar 25 18:10:39.868: INFO: creating *v1.RoleBinding: csi-mock-volumes-8653-3070/csi-provisioner-role-cfg Mar 25 18:10:39.882: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8653-3070/csi-resizer Mar 25 18:10:39.900: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8653 Mar 25 18:10:39.900: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8653 Mar 25 
18:10:39.936: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8653 Mar 25 18:10:39.948: INFO: creating *v1.Role: csi-mock-volumes-8653-3070/external-resizer-cfg-csi-mock-volumes-8653 Mar 25 18:10:39.983: INFO: creating *v1.RoleBinding: csi-mock-volumes-8653-3070/csi-resizer-role-cfg Mar 25 18:10:40.018: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8653-3070/csi-snapshotter Mar 25 18:10:40.032: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8653 Mar 25 18:10:40.032: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8653 Mar 25 18:10:40.050: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8653 Mar 25 18:10:40.062: INFO: creating *v1.Role: csi-mock-volumes-8653-3070/external-snapshotter-leaderelection-csi-mock-volumes-8653 Mar 25 18:10:40.181: INFO: creating *v1.RoleBinding: csi-mock-volumes-8653-3070/external-snapshotter-leaderelection Mar 25 18:10:40.187: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8653-3070/csi-mock Mar 25 18:10:40.221: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8653 Mar 25 18:10:40.241: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8653 Mar 25 18:10:40.253: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8653 Mar 25 18:10:40.313: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8653 Mar 25 18:10:40.317: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8653 Mar 25 18:10:40.352: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8653 Mar 25 18:10:40.367: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8653 Mar 25 18:10:40.374: INFO: creating *v1.StatefulSet: csi-mock-volumes-8653-3070/csi-mockplugin Mar 25 18:10:40.383: INFO: creating *v1.StatefulSet: csi-mock-volumes-8653-3070/csi-mockplugin-attacher Mar 25 18:10:40.462: INFO: creating *v1.StatefulSet: csi-mock-volumes-8653-3070/csi-mockplugin-resizer Mar 25 18:10:40.474: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8653 to register on node latest-worker2 STEP: Creating pod Mar 25 18:10:50.132: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 18:10:50.151: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-zfwmt] to have phase Bound Mar 25 18:10:50.156: INFO: PersistentVolumeClaim pvc-zfwmt found but phase is Pending instead of Bound. 
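Editor's note: once the claim is bound, the "Expanding current pvc" step that follows raises the claim's storage request and waits for the resize to be recorded on the PV and PVC. A hedged client-go sketch of that expansion request is below; claim name, namespace and target size are placeholders, and it assumes the StorageClass sets allowVolumeExpansion: true.

```go
// expandpvc.go: request volume expansion by raising the claim's storage
// request, the operation behind "STEP: Expanding current pvc" below.
// Sketch only; identifiers and size are placeholders.
package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns, claim := "csi-mock-volumes", "my-pvc" // placeholders
	pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), claim, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Bump the request; the external-resizer sidecar observes the change and
	// asks the CSI driver to expand the volume. With node expansion off, no
	// node-side step is needed, so the pod does not have to restart.
	pvc.Spec.Resources.Requests[v1.ResourceStorage] = resource.MustParse("2Gi")
	if _, err := cs.CoreV1().PersistentVolumeClaims(ns).Update(context.TODO(), pvc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```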
Mar 25 18:10:52.368: INFO: PersistentVolumeClaim pvc-zfwmt found and phase=Bound (2.217022371s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-v8cvz Mar 25 18:11:08.615: INFO: Deleting pod "pvc-volume-tester-v8cvz" in namespace "csi-mock-volumes-8653" Mar 25 18:11:08.625: INFO: Wait up to 5m0s for pod "pvc-volume-tester-v8cvz" to be fully deleted STEP: Deleting claim pvc-zfwmt Mar 25 18:11:16.663: INFO: Waiting up to 2m0s for PersistentVolume pvc-33a2cf81-fb67-4208-8f49-4dba65251e2b to get deleted Mar 25 18:11:16.686: INFO: PersistentVolume pvc-33a2cf81-fb67-4208-8f49-4dba65251e2b found and phase=Bound (23.608411ms) Mar 25 18:11:18.810: INFO: PersistentVolume pvc-33a2cf81-fb67-4208-8f49-4dba65251e2b found and phase=Released (2.147391032s) Mar 25 18:11:20.852: INFO: PersistentVolume pvc-33a2cf81-fb67-4208-8f49-4dba65251e2b found and phase=Released (4.188938711s) Mar 25 18:11:22.856: INFO: PersistentVolume pvc-33a2cf81-fb67-4208-8f49-4dba65251e2b found and phase=Released (6.193600441s) Mar 25 18:11:24.861: INFO: PersistentVolume pvc-33a2cf81-fb67-4208-8f49-4dba65251e2b was removed STEP: Deleting storageclass csi-mock-volumes-8653-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8653 STEP: Waiting for namespaces [csi-mock-volumes-8653] to vanish STEP: uninstalling csi mock driver Mar 25 18:11:30.880: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8653-3070/csi-attacher Mar 25 18:11:30.885: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8653 Mar 25 18:11:30.893: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8653 Mar 25 18:11:30.903: INFO: deleting *v1.Role: csi-mock-volumes-8653-3070/external-attacher-cfg-csi-mock-volumes-8653 Mar 25 18:11:30.910: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8653-3070/csi-attacher-role-cfg Mar 25 18:11:30.916: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8653-3070/csi-provisioner Mar 25 18:11:30.921: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8653 Mar 25 18:11:30.939: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8653 Mar 25 18:11:30.957: INFO: deleting *v1.Role: csi-mock-volumes-8653-3070/external-provisioner-cfg-csi-mock-volumes-8653 Mar 25 18:11:30.964: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8653-3070/csi-provisioner-role-cfg Mar 25 18:11:30.971: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8653-3070/csi-resizer Mar 25 18:11:30.996: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8653 Mar 25 18:11:31.019: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8653 Mar 25 18:11:31.047: INFO: deleting *v1.Role: csi-mock-volumes-8653-3070/external-resizer-cfg-csi-mock-volumes-8653 Mar 25 18:11:31.084: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8653-3070/csi-resizer-role-cfg Mar 25 18:11:31.111: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8653-3070/csi-snapshotter Mar 25 18:11:31.139: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8653 Mar 25 18:11:31.168: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8653 Mar 25 18:11:31.179: INFO: deleting *v1.Role: csi-mock-volumes-8653-3070/external-snapshotter-leaderelection-csi-mock-volumes-8653 Mar 25 18:11:31.186: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8653-3070/external-snapshotter-leaderelection Mar 
25 18:11:31.192: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8653-3070/csi-mock Mar 25 18:11:31.198: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8653 Mar 25 18:11:31.223: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8653 Mar 25 18:11:31.233: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8653 Mar 25 18:11:31.251: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8653 Mar 25 18:11:31.264: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8653 Mar 25 18:11:31.270: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8653 Mar 25 18:11:31.276: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8653 Mar 25 18:11:31.281: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8653-3070/csi-mockplugin Mar 25 18:11:31.288: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8653-3070/csi-mockplugin-attacher Mar 25 18:11:31.293: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8653-3070/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-8653-3070 STEP: Waiting for namespaces [csi-mock-volumes-8653-3070] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:12:29.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:109.861 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should expand volume without restarting pod if nodeExpansion=off /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":115,"completed":65,"skipped":4337,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48 [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:12:29.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48 STEP: Creating a pod to test hostPath mode Mar 25 18:12:29.492: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-440" to be "Succeeded or Failed" Mar 25 18:12:29.510: INFO: Pod "pod-host-path-test": 
Phase="Pending", Reason="", readiness=false. Elapsed: 17.835272ms Mar 25 18:12:31.518: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026351463s Mar 25 18:12:33.522: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030300683s Mar 25 18:12:35.527: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035343427s STEP: Saw pod success Mar 25 18:12:35.528: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Mar 25 18:12:35.530: INFO: Trying to get logs from node latest-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 25 18:12:35.588: INFO: Waiting for pod pod-host-path-test to disappear Mar 25 18:12:35.608: INFO: Pod pod-host-path-test no longer exists Mar 25 18:12:35.608: FAIL: Unexpected error: <*errors.errorString | 0xc002a9ef80>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc000e4e160, 0x6b6efc8, 0xd, 0xc00329dc00, 0x0, 0xc00559f1c0, 0x1, 0x1, 0x6d64568) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742 +0x1e5 k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:564 k8s.io/kubernetes/test/e2e/common/storage.glob..func5.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:59 +0x299 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00331cd80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00331cd80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc00331cd80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "hostpath-440". STEP: Found 7 events. 
Mar 25 18:12:35.614: INFO: At 2021-03-25 18:12:29 +0000 UTC - event for pod-host-path-test: {default-scheduler } Scheduled: Successfully assigned hostpath-440/pod-host-path-test to latest-worker2 Mar 25 18:12:35.614: INFO: At 2021-03-25 18:12:30 +0000 UTC - event for pod-host-path-test: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 18:12:35.614: INFO: At 2021-03-25 18:12:31 +0000 UTC - event for pod-host-path-test: {kubelet latest-worker2} Created: Created container test-container-1 Mar 25 18:12:35.614: INFO: At 2021-03-25 18:12:31 +0000 UTC - event for pod-host-path-test: {kubelet latest-worker2} Started: Started container test-container-1 Mar 25 18:12:35.614: INFO: At 2021-03-25 18:12:32 +0000 UTC - event for pod-host-path-test: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 18:12:35.614: INFO: At 2021-03-25 18:12:33 +0000 UTC - event for pod-host-path-test: {kubelet latest-worker2} Created: Created container test-container-2 Mar 25 18:12:35.614: INFO: At 2021-03-25 18:12:33 +0000 UTC - event for pod-host-path-test: {kubelet latest-worker2} Started: Started container test-container-2 Mar 25 18:12:35.637: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 18:12:35.637: INFO: Mar 25 18:12:35.641: INFO: Logging node info for node latest-control-plane Mar 25 18:12:35.645: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1286163 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 18:09:49 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 18:09:49 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 18:09:49 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 18:09:49 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 18:12:35.645: INFO: Logging kubelet events for node latest-control-plane Mar 25 18:12:35.648: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 25 18:12:35.673: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 18:12:35.673: INFO: Container etcd ready: true, restart count 0 Mar 25 18:12:35.673: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 18:12:35.673: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 18:12:35.673: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 18:12:35.673: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 18:12:35.673: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 18:12:35.673: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 18:12:35.673: INFO: coredns-74ff55c5b-nh9lj started at 2021-03-25 
11:13:01 +0000 UTC (0+1 container statuses recorded) Mar 25 18:12:35.673: INFO: Container coredns ready: true, restart count 0 Mar 25 18:12:35.673: INFO: coredns-74ff55c5b-zfkjb started at 2021-03-25 11:13:02 +0000 UTC (0+1 container statuses recorded) Mar 25 18:12:35.673: INFO: Container coredns ready: true, restart count 0 Mar 25 18:12:35.673: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 18:12:35.673: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 18:12:35.673: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 18:12:35.673: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 18:12:35.673: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 18:12:35.673: INFO: Container local-path-provisioner ready: true, restart count 0 W0325 18:12:35.680283 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 18:12:35.773: INFO: Latency metrics for node latest-control-plane Mar 25 18:12:35.773: INFO: Logging node info for node latest-worker Mar 25 18:12:35.778: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1286335 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 18:01:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 18:10:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 18:10:09 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 18:10:09 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 18:10:09 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 18:10:09 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 18:12:35.779: INFO: Logging kubelet events for node latest-worker Mar 25 18:12:35.782: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 18:12:35.792: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 18:12:35.792: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 18:12:35.792: INFO: busybox-a43239ce-94b2-424c-8947-ff05503b6ad8 started at 2021-03-25 18:12:21 +0000 UTC (0+1 container statuses recorded) Mar 25 18:12:35.792: INFO: Container busybox ready: true, restart count 0 Mar 25 18:12:35.792: INFO: kindnet-jmhgw started at 2021-03-25 12:24:39 +0000 UTC (0+1 container statuses recorded) Mar 25 18:12:35.792: INFO: Container kindnet-cni ready: true, restart count 0 W0325 18:12:35.798719 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
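Editor's note: after the failure, the framework dumps each node object and the pods scheduled on it ("Logging pods the kubelet thinks is on node ..."). A hedged client-go sketch that produces a similar per-node pod listing via a spec.nodeName field selector follows; the node name is a placeholder.

```go
// podsonnode.go: list the pods scheduled on one node, similar to the
// per-node pod dump in the failure diagnostics above. Sketch only.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node := "latest-worker" // placeholder
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + node,
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
```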
Mar 25 18:12:35.918: INFO: Latency metrics for node latest-worker Mar 25 18:12:35.918: INFO: Logging node info for node latest-worker2 Mar 25 18:12:35.921: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1287209 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 18:10:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 18:11:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 18:11:20 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 18:11:20 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 18:11:20 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 18:11:20 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 18:12:35.922: INFO: Logging kubelet events for node latest-worker2 Mar 25 18:12:35.925: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 18:12:35.929: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 18:12:35.929: INFO: Container volume-tester ready: false, restart count 0 Mar 25 18:12:35.929: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 18:12:35.929: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 18:12:35.929: INFO: kindnet-f7zk8 started at 2021-03-25 12:10:50 +0000 UTC (0+1 container statuses recorded) Mar 25 18:12:35.929: INFO: Container kindnet-cni ready: true, restart count 0 W0325 18:12:35.934591 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 18:12:36.045: INFO: Latency metrics for node latest-worker2 Mar 25 18:12:36.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-440" for this suite. 
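The node-info, image, and kubelet-pod dumps above are the framework's standard diagnostics once a spec has failed. The same data can be pulled by hand against the cluster under test; a minimal sketch, assuming the kubeconfig path used by this run and the node names shown in the dump (latest-worker, latest-worker2):

  # Node conditions, capacity and cached images (mirrors "Logging node info for node ...").
  export KUBECONFIG=/root/.kube/config
  kubectl describe node latest-worker

  # Pods the kubelet is running on that node (mirrors the "Logging pods the kubelet thinks is on node" lines).
  kubectl get pods --all-namespaces --field-selector spec.nodeName=latest-worker -o wide

  # Node-scoped events, often the quickest pointer when setup or teardown fails.
  kubectl get events --all-namespaces --field-selector involvedObject.kind=Node,involvedObject.name=latest-worker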
• Failure [6.655 seconds] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should give a volume the correct mode [LinuxOnly] [NodeConformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48 Mar 25 18:12:35.608: Unexpected error: <*errors.errorString | 0xc002a9ef80>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742 ------------------------------ {"msg":"FAILED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":115,"completed":65,"skipped":4340,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:12:36.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:354 STEP: Initializing test volumes Mar 25 18:12:38.199: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a18993c6-b9d8-4163-8dfa-c281bcb0059d] Namespace:persistent-local-volumes-test-2894 PodName:hostexec-latest-worker-5dsph ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:12:38.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:12:38.321: INFO: Creating a PV followed by a PVC Mar 25 18:12:38.332: INFO: Waiting for PV local-pv6nhd8 to bind to PVC pvc-6kmrv Mar 25 18:12:38.332: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-6kmrv] to have phase Bound Mar 25 18:12:38.338: INFO: PersistentVolumeClaim pvc-6kmrv found but phase is Pending instead of Bound. Mar 25 18:12:40.342: INFO: PersistentVolumeClaim pvc-6kmrv found but phase is Pending instead of Bound. Mar 25 18:12:42.347: INFO: PersistentVolumeClaim pvc-6kmrv found but phase is Pending instead of Bound. 
Mar 25 18:12:44.353: INFO: PersistentVolumeClaim pvc-6kmrv found but phase is Pending instead of Bound. Mar 25 18:12:46.357: INFO: PersistentVolumeClaim pvc-6kmrv found but phase is Pending instead of Bound. Mar 25 18:12:48.361: INFO: PersistentVolumeClaim pvc-6kmrv found and phase=Bound (10.029107281s) Mar 25 18:12:48.361: INFO: Waiting up to 3m0s for PersistentVolume local-pv6nhd8 to have phase Bound Mar 25 18:12:48.364: INFO: PersistentVolume local-pv6nhd8 found and phase=Bound (2.678142ms) [It] should fail scheduling due to different NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375 STEP: local-volume-type: dir STEP: Initializing test volumes Mar 25 18:12:48.369: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-622c9f04-1b55-4bc0-9b07-013dcc492a61] Namespace:persistent-local-volumes-test-2894 PodName:hostexec-latest-worker-5dsph ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:12:48.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:12:48.497: INFO: Creating a PV followed by a PVC Mar 25 18:12:48.515: INFO: Waiting for PV local-pvrzbgg to bind to PVC pvc-qcnc4 Mar 25 18:12:48.515: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-qcnc4] to have phase Bound Mar 25 18:12:48.534: INFO: PersistentVolumeClaim pvc-qcnc4 found but phase is Pending instead of Bound. Mar 25 18:12:50.539: INFO: PersistentVolumeClaim pvc-qcnc4 found but phase is Pending instead of Bound. Mar 25 18:12:52.546: INFO: PersistentVolumeClaim pvc-qcnc4 found but phase is Pending instead of Bound. Mar 25 18:12:54.549: INFO: PersistentVolumeClaim pvc-qcnc4 found but phase is Pending instead of Bound. Mar 25 18:12:56.556: INFO: PersistentVolumeClaim pvc-qcnc4 found but phase is Pending instead of Bound. Mar 25 18:12:58.560: INFO: PersistentVolumeClaim pvc-qcnc4 found but phase is Pending instead of Bound. Mar 25 18:13:00.565: INFO: PersistentVolumeClaim pvc-qcnc4 found but phase is Pending instead of Bound. Mar 25 18:13:02.587: INFO: PersistentVolumeClaim pvc-qcnc4 found but phase is Pending instead of Bound. Mar 25 18:13:04.591: INFO: PersistentVolumeClaim pvc-qcnc4 found and phase=Bound (16.076293293s) Mar 25 18:13:04.591: INFO: Waiting up to 3m0s for PersistentVolume local-pvrzbgg to have phase Bound Mar 25 18:13:04.594: INFO: PersistentVolume local-pvrzbgg found and phase=Bound (2.891067ms) Mar 25 18:13:04.605: INFO: Waiting up to 5m0s for pod "pod-443bbc63-a045-4d57-8e97-7573b769b090" in namespace "persistent-local-volumes-test-2894" to be "Unschedulable" Mar 25 18:13:04.612: INFO: Pod "pod-443bbc63-a045-4d57-8e97-7573b769b090": Phase="Pending", Reason="", readiness=false. Elapsed: 6.534477ms Mar 25 18:13:06.617: INFO: Pod "pod-443bbc63-a045-4d57-8e97-7573b769b090": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.011526036s Mar 25 18:13:06.617: INFO: Pod "pod-443bbc63-a045-4d57-8e97-7573b769b090" satisfied condition "Unschedulable" [AfterEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:370 STEP: Cleaning up PVC and PV Mar 25 18:13:06.617: INFO: Deleting PersistentVolumeClaim "pvc-6kmrv" Mar 25 18:13:06.623: INFO: Deleting PersistentVolume "local-pv6nhd8" STEP: Removing the test directory Mar 25 18:13:06.646: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a18993c6-b9d8-4163-8dfa-c281bcb0059d] Namespace:persistent-local-volumes-test-2894 PodName:hostexec-latest-worker-5dsph ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:13:06.646: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:13:06.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2894" for this suite. • [SLOW TEST:30.725 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347 should fail scheduling due to different NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":115,"completed":66,"skipped":4348,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSS ------------------------------ [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:13:06.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not expand volume if resizingOnDriver=off, resizingOnSC=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-4144 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 18:13:07.037: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4144-6536/csi-attacher Mar 25 18:13:07.041: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4144 Mar 25 18:13:07.041: INFO: Define cluster role 
external-attacher-runner-csi-mock-volumes-4144 Mar 25 18:13:07.057: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4144 Mar 25 18:13:07.082: INFO: creating *v1.Role: csi-mock-volumes-4144-6536/external-attacher-cfg-csi-mock-volumes-4144 Mar 25 18:13:07.108: INFO: creating *v1.RoleBinding: csi-mock-volumes-4144-6536/csi-attacher-role-cfg Mar 25 18:13:07.170: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4144-6536/csi-provisioner Mar 25 18:13:07.178: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4144 Mar 25 18:13:07.178: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4144 Mar 25 18:13:07.190: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4144 Mar 25 18:13:07.195: INFO: creating *v1.Role: csi-mock-volumes-4144-6536/external-provisioner-cfg-csi-mock-volumes-4144 Mar 25 18:13:07.214: INFO: creating *v1.RoleBinding: csi-mock-volumes-4144-6536/csi-provisioner-role-cfg Mar 25 18:13:07.225: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4144-6536/csi-resizer Mar 25 18:13:07.231: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4144 Mar 25 18:13:07.231: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4144 Mar 25 18:13:07.249: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4144 Mar 25 18:13:07.261: INFO: creating *v1.Role: csi-mock-volumes-4144-6536/external-resizer-cfg-csi-mock-volumes-4144 Mar 25 18:13:07.302: INFO: creating *v1.RoleBinding: csi-mock-volumes-4144-6536/csi-resizer-role-cfg Mar 25 18:13:07.310: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4144-6536/csi-snapshotter Mar 25 18:13:07.327: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4144 Mar 25 18:13:07.327: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4144 Mar 25 18:13:07.345: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4144 Mar 25 18:13:07.377: INFO: creating *v1.Role: csi-mock-volumes-4144-6536/external-snapshotter-leaderelection-csi-mock-volumes-4144 Mar 25 18:13:07.400: INFO: creating *v1.RoleBinding: csi-mock-volumes-4144-6536/external-snapshotter-leaderelection Mar 25 18:13:07.446: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4144-6536/csi-mock Mar 25 18:13:07.453: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4144 Mar 25 18:13:07.459: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4144 Mar 25 18:13:07.465: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4144 Mar 25 18:13:07.471: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4144 Mar 25 18:13:07.489: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4144 Mar 25 18:13:07.538: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4144 Mar 25 18:13:07.571: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4144 Mar 25 18:13:07.575: INFO: creating *v1.StatefulSet: csi-mock-volumes-4144-6536/csi-mockplugin Mar 25 18:13:07.585: INFO: creating *v1.StatefulSet: csi-mock-volumes-4144-6536/csi-mockplugin-attacher Mar 25 18:13:07.604: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4144 to register on node latest-worker STEP: Creating pod Mar 25 18:13:17.215: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, 
treating as nil Mar 25 18:13:17.221: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-cnt28] to have phase Bound Mar 25 18:13:17.226: INFO: PersistentVolumeClaim pvc-cnt28 found but phase is Pending instead of Bound. Mar 25 18:13:19.229: INFO: PersistentVolumeClaim pvc-cnt28 found and phase=Bound (2.007040444s) STEP: Expanding current pvc STEP: Deleting pod pvc-volume-tester-p2fwv Mar 25 18:15:25.467: INFO: Deleting pod "pvc-volume-tester-p2fwv" in namespace "csi-mock-volumes-4144" Mar 25 18:15:25.472: INFO: Wait up to 5m0s for pod "pvc-volume-tester-p2fwv" to be fully deleted STEP: Deleting claim pvc-cnt28 Mar 25 18:16:05.509: INFO: Waiting up to 2m0s for PersistentVolume pvc-9d16a92a-b728-4ba1-a12f-98b8b3c530f2 to get deleted Mar 25 18:16:05.518: INFO: PersistentVolume pvc-9d16a92a-b728-4ba1-a12f-98b8b3c530f2 found and phase=Bound (8.541173ms) Mar 25 18:16:07.673: INFO: PersistentVolume pvc-9d16a92a-b728-4ba1-a12f-98b8b3c530f2 was removed STEP: Deleting storageclass csi-mock-volumes-4144-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4144 STEP: Waiting for namespaces [csi-mock-volumes-4144] to vanish STEP: uninstalling csi mock driver Mar 25 18:16:13.727: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4144-6536/csi-attacher Mar 25 18:16:13.732: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4144 Mar 25 18:16:13.743: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4144 Mar 25 18:16:13.754: INFO: deleting *v1.Role: csi-mock-volumes-4144-6536/external-attacher-cfg-csi-mock-volumes-4144 Mar 25 18:16:13.779: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4144-6536/csi-attacher-role-cfg Mar 25 18:16:13.810: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4144-6536/csi-provisioner Mar 25 18:16:13.846: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4144 Mar 25 18:16:13.860: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4144 Mar 25 18:16:13.868: INFO: deleting *v1.Role: csi-mock-volumes-4144-6536/external-provisioner-cfg-csi-mock-volumes-4144 Mar 25 18:16:13.874: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4144-6536/csi-provisioner-role-cfg Mar 25 18:16:13.880: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4144-6536/csi-resizer Mar 25 18:16:13.886: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4144 Mar 25 18:16:13.945: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4144 Mar 25 18:16:13.952: INFO: deleting *v1.Role: csi-mock-volumes-4144-6536/external-resizer-cfg-csi-mock-volumes-4144 Mar 25 18:16:13.958: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4144-6536/csi-resizer-role-cfg Mar 25 18:16:13.964: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4144-6536/csi-snapshotter Mar 25 18:16:13.983: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4144 Mar 25 18:16:13.995: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4144 Mar 25 18:16:14.005: INFO: deleting *v1.Role: csi-mock-volumes-4144-6536/external-snapshotter-leaderelection-csi-mock-volumes-4144 Mar 25 18:16:14.012: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4144-6536/external-snapshotter-leaderelection Mar 25 18:16:14.018: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4144-6536/csi-mock Mar 25 18:16:14.025: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4144 Mar 25 18:16:14.071: INFO: deleting 
*v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4144 Mar 25 18:16:14.097: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4144 Mar 25 18:16:14.108: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4144 Mar 25 18:16:14.114: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4144 Mar 25 18:16:14.119: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4144 Mar 25 18:16:14.126: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4144 Mar 25 18:16:14.131: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4144-6536/csi-mockplugin Mar 25 18:16:14.139: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4144-6536/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-4144-6536 STEP: Waiting for namespaces [csi-mock-volumes-4144-6536] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:17:12.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:245.366 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should not expand volume if resizingOnDriver=off, resizingOnSC=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on","total":115,"completed":67,"skipped":4354,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:17:12.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75 STEP: Creating configMap with name projected-configmap-test-volume-97e48748-24c2-4369-b825-c271f2c1064b STEP: Creating a pod to test consume configMaps Mar 25 18:17:12.314: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-54b7670f-6edb-4f12-9b5d-c69a4e29511c" in namespace "projected-988" to be "Succeeded or Failed" Mar 25 18:17:12.334: INFO: Pod "pod-projected-configmaps-54b7670f-6edb-4f12-9b5d-c69a4e29511c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.25962ms Mar 25 18:17:14.339: INFO: Pod "pod-projected-configmaps-54b7670f-6edb-4f12-9b5d-c69a4e29511c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025089319s Mar 25 18:17:16.344: INFO: Pod "pod-projected-configmaps-54b7670f-6edb-4f12-9b5d-c69a4e29511c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030195638s STEP: Saw pod success Mar 25 18:17:16.344: INFO: Pod "pod-projected-configmaps-54b7670f-6edb-4f12-9b5d-c69a4e29511c" satisfied condition "Succeeded or Failed" Mar 25 18:17:16.347: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-54b7670f-6edb-4f12-9b5d-c69a4e29511c container agnhost-container: STEP: delete the pod Mar 25 18:17:16.402: INFO: Waiting for pod pod-projected-configmaps-54b7670f-6edb-4f12-9b5d-c69a4e29511c to disappear Mar 25 18:17:16.455: INFO: Pod pod-projected-configmaps-54b7670f-6edb-4f12-9b5d-c69a4e29511c no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:17:16.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-988" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":115,"completed":68,"skipped":4354,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:17:16.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-ffd74ef3-72f7-43b1-acb3-38800a16c683" Mar 25 18:17:18.624: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ffd74ef3-72f7-43b1-acb3-38800a16c683" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ffd74ef3-72f7-43b1-acb3-38800a16c683" "/tmp/local-volume-test-ffd74ef3-72f7-43b1-acb3-38800a16c683"] Namespace:persistent-local-volumes-test-6054 PodName:hostexec-latest-worker-wpn9m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:17:18.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:17:18.734: INFO: Creating a PV followed by a PVC Mar 25 
18:17:18.743: INFO: Waiting for PV local-pv989wd to bind to PVC pvc-njlkb Mar 25 18:17:18.743: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-njlkb] to have phase Bound Mar 25 18:17:18.749: INFO: PersistentVolumeClaim pvc-njlkb found but phase is Pending instead of Bound. Mar 25 18:17:20.754: INFO: PersistentVolumeClaim pvc-njlkb found but phase is Pending instead of Bound. Mar 25 18:17:22.758: INFO: PersistentVolumeClaim pvc-njlkb found but phase is Pending instead of Bound. Mar 25 18:17:24.763: INFO: PersistentVolumeClaim pvc-njlkb found but phase is Pending instead of Bound. Mar 25 18:17:26.768: INFO: PersistentVolumeClaim pvc-njlkb found but phase is Pending instead of Bound. Mar 25 18:17:28.772: INFO: PersistentVolumeClaim pvc-njlkb found but phase is Pending instead of Bound. Mar 25 18:17:30.778: INFO: PersistentVolumeClaim pvc-njlkb found but phase is Pending instead of Bound. Mar 25 18:17:32.782: INFO: PersistentVolumeClaim pvc-njlkb found and phase=Bound (14.03872678s) Mar 25 18:17:32.782: INFO: Waiting up to 3m0s for PersistentVolume local-pv989wd to have phase Bound Mar 25 18:17:32.784: INFO: PersistentVolume local-pv989wd found and phase=Bound (2.243935ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Mar 25 18:17:36.808: INFO: pod "pod-ac3c6c99-2d7b-4237-a695-93d827043c35" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 18:17:36.808: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6054 PodName:pod-ac3c6c99-2d7b-4237-a695-93d827043c35 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:17:36.808: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:17:36.947: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 18:17:36.947: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6054 PodName:pod-ac3c6c99-2d7b-4237-a695-93d827043c35 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:17:36.947: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:17:37.034: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Mar 25 18:17:41.067: INFO: pod "pod-ad82b6c2-2aec-43a0-a05d-3bac251131bb" created on Node "latest-worker" Mar 25 18:17:41.067: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6054 PodName:pod-ad82b6c2-2aec-43a0-a05d-3bac251131bb ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:17:41.067: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:17:41.187: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Mar 25 18:17:41.187: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-ffd74ef3-72f7-43b1-acb3-38800a16c683 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6054 PodName:pod-ad82b6c2-2aec-43a0-a05d-3bac251131bb ContainerName:write-pod Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:17:41.187: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:17:41.290: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-ffd74ef3-72f7-43b1-acb3-38800a16c683 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Mar 25 18:17:41.290: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6054 PodName:pod-ac3c6c99-2d7b-4237-a695-93d827043c35 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:17:41.290: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:17:41.398: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-ffd74ef3-72f7-43b1-acb3-38800a16c683", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-ac3c6c99-2d7b-4237-a695-93d827043c35 in namespace persistent-local-volumes-test-6054 STEP: Deleting pod2 STEP: Deleting pod pod-ad82b6c2-2aec-43a0-a05d-3bac251131bb in namespace persistent-local-volumes-test-6054 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 18:17:41.444: INFO: Deleting PersistentVolumeClaim "pvc-njlkb" Mar 25 18:17:41.466: INFO: Deleting PersistentVolume "local-pv989wd" STEP: Unmount tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-ffd74ef3-72f7-43b1-acb3-38800a16c683" Mar 25 18:17:41.505: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ffd74ef3-72f7-43b1-acb3-38800a16c683"] Namespace:persistent-local-volumes-test-6054 PodName:hostexec-latest-worker-wpn9m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:17:41.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Mar 25 18:17:41.640: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ffd74ef3-72f7-43b1-acb3-38800a16c683] Namespace:persistent-local-volumes-test-6054 PodName:hostexec-latest-worker-wpn9m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:17:41.640: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:17:41.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6054" for this suite. 
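The [Volume type: tmpfs] spec above prepares its backing store directly on the worker node through a hostexec pod and removes it again once the test pods and the PV/PVC are gone. A condensed sketch of the node-side commands, with a hypothetical path in place of the random /tmp/local-volume-test-<uuid> directory the framework generates and drives via nsenter, as logged above:

  # Create a 10 MiB tmpfs-backed directory to serve as the local PersistentVolume.
  VOL=/tmp/local-volume-test-example   # hypothetical; the test appends a random UUID
  mkdir -p "$VOL"
  mount -t tmpfs -o size=10m tmpfs-"$VOL" "$VOL"

  # pod1 then writes and pod2 reads the same file through the bound local PV, e.g.:
  #   mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file
  #   cat /mnt/volume1/test-file

  # Teardown after the PVC and PV are deleted.
  umount "$VOL"
  rm -r "$VOL"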
• [SLOW TEST:25.442 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":115,"completed":69,"skipped":4357,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Volumes NFSv4 should be mountable for NFSv4 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:79 [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:17:41.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Mar 25 18:17:42.220: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:17:42.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-6347" for this suite. 
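The NFSv4 spec above is skipped in its BeforeEach because this run uses the default node OS distro (debian), while that volume server is only exercised on gci, ubuntu, or custom nodes. Such gates are driven by suite-level flags rather than by the specs themselves; a hedged sketch of how a run might be invoked so the distro-gated storage specs are considered, assuming the e2e.test binary and kubeconfig from this run, with illustrative flag values:

  ./e2e.test \
    --kubeconfig=/root/.kube/config \
    --provider=local \
    --node-os-distro=ubuntu \
    -ginkgo.focus='\[sig-storage\].*NFSv4'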
S [SKIPPING] in Spec Setup (BeforeEach) [0.348 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 NFSv4 [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:78 should be mountable for NFSv4 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:79 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes GCEPD should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:126 [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:17:42.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Mar 25 18:17:42.365: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:17:42.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2993" for this suite. 
[AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:110 Mar 25 18:17:42.375: INFO: AfterEach: Cleaning up test resources Mar 25 18:17:42.375: INFO: pvc is nil Mar 25 18:17:42.375: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.119 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:126 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:17:42.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-983b513f-e6ca-4590-bf8e-60b5ed5c2930" Mar 25 18:17:46.732: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-983b513f-e6ca-4590-bf8e-60b5ed5c2930 && dd if=/dev/zero of=/tmp/local-volume-test-983b513f-e6ca-4590-bf8e-60b5ed5c2930/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-983b513f-e6ca-4590-bf8e-60b5ed5c2930/file] Namespace:persistent-local-volumes-test-5189 PodName:hostexec-latest-worker2-bshsf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:17:46.733: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:17:46.924: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-983b513f-e6ca-4590-bf8e-60b5ed5c2930/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5189 PodName:hostexec-latest-worker2-bshsf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:17:46.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:17:47.041: INFO: Creating a PV followed by a PVC Mar 25 18:17:47.440: INFO: Waiting for PV local-pvr79qw to bind to PVC pvc-4m78p Mar 25 
18:17:47.440: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-4m78p] to have phase Bound Mar 25 18:17:47.456: INFO: PersistentVolumeClaim pvc-4m78p found but phase is Pending instead of Bound. Mar 25 18:17:49.469: INFO: PersistentVolumeClaim pvc-4m78p found and phase=Bound (2.029178775s) Mar 25 18:17:49.469: INFO: Waiting up to 3m0s for PersistentVolume local-pvr79qw to have phase Bound Mar 25 18:17:49.473: INFO: PersistentVolume local-pvr79qw found and phase=Bound (3.310942ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Mar 25 18:17:53.576: INFO: pod "pod-609145b8-6eeb-4594-8658-27e327a9bc15" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 18:17:53.576: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5189 PodName:pod-609145b8-6eeb-4594-8658-27e327a9bc15 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:17:53.576: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:17:53.677: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 18:17:53.677: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5189 PodName:pod-609145b8-6eeb-4594-8658-27e327a9bc15 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:17:53.678: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:17:53.778: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-609145b8-6eeb-4594-8658-27e327a9bc15 in namespace persistent-local-volumes-test-5189 STEP: Creating pod2 STEP: Creating a pod Mar 25 18:17:57.842: INFO: pod "pod-09d37b32-6e12-43b6-80f5-24554b9bb469" created on Node "latest-worker2" STEP: Reading in pod2 Mar 25 18:17:57.842: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5189 PodName:pod-09d37b32-6e12-43b6-80f5-24554b9bb469 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:17:57.842: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:17:57.974: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-09d37b32-6e12-43b6-80f5-24554b9bb469 in namespace persistent-local-volumes-test-5189 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 18:17:57.980: INFO: Deleting PersistentVolumeClaim "pvc-4m78p" Mar 25 18:17:57.999: INFO: Deleting PersistentVolume "local-pvr79qw" Mar 25 18:17:58.064: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-983b513f-e6ca-4590-bf8e-60b5ed5c2930/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5189 PodName:hostexec-latest-worker2-bshsf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true 
Quiet:false} Mar 25 18:17:58.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-983b513f-e6ca-4590-bf8e-60b5ed5c2930/file Mar 25 18:17:58.155: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-5189 PodName:hostexec-latest-worker2-bshsf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:17:58.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-983b513f-e6ca-4590-bf8e-60b5ed5c2930 Mar 25 18:17:58.280: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-983b513f-e6ca-4590-bf8e-60b5ed5c2930] Namespace:persistent-local-volumes-test-5189 PodName:hostexec-latest-worker2-bshsf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:17:58.280: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:17:58.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5189" for this suite. • [SLOW TEST:16.093 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":115,"completed":70,"skipped":4449,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:17:58.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 
[BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 18:18:02.612: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-b02d67c8-2def-4dd6-96af-d22ac92795f5 && mount --bind /tmp/local-volume-test-b02d67c8-2def-4dd6-96af-d22ac92795f5 /tmp/local-volume-test-b02d67c8-2def-4dd6-96af-d22ac92795f5] Namespace:persistent-local-volumes-test-5946 PodName:hostexec-latest-worker-4t6pg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:18:02.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:18:02.750: INFO: Creating a PV followed by a PVC Mar 25 18:18:02.763: INFO: Waiting for PV local-pvrgmw5 to bind to PVC pvc-q7zd7 Mar 25 18:18:02.763: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-q7zd7] to have phase Bound Mar 25 18:18:02.769: INFO: PersistentVolumeClaim pvc-q7zd7 found but phase is Pending instead of Bound. Mar 25 18:18:04.774: INFO: PersistentVolumeClaim pvc-q7zd7 found but phase is Pending instead of Bound. Mar 25 18:18:06.779: INFO: PersistentVolumeClaim pvc-q7zd7 found but phase is Pending instead of Bound. Mar 25 18:18:08.785: INFO: PersistentVolumeClaim pvc-q7zd7 found but phase is Pending instead of Bound. Mar 25 18:18:10.789: INFO: PersistentVolumeClaim pvc-q7zd7 found but phase is Pending instead of Bound. Mar 25 18:18:12.794: INFO: PersistentVolumeClaim pvc-q7zd7 found but phase is Pending instead of Bound. Mar 25 18:18:14.799: INFO: PersistentVolumeClaim pvc-q7zd7 found but phase is Pending instead of Bound. Mar 25 18:18:16.805: INFO: PersistentVolumeClaim pvc-q7zd7 found but phase is Pending instead of Bound. 
Mar 25 18:18:18.810: INFO: PersistentVolumeClaim pvc-q7zd7 found and phase=Bound (16.046695552s) Mar 25 18:18:18.810: INFO: Waiting up to 3m0s for PersistentVolume local-pvrgmw5 to have phase Bound Mar 25 18:18:18.812: INFO: PersistentVolume local-pvrgmw5 found and phase=Bound (2.441347ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Mar 25 18:18:18.817: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 18:18:18.818: INFO: Deleting PersistentVolumeClaim "pvc-q7zd7" Mar 25 18:18:18.822: INFO: Deleting PersistentVolume "local-pvrgmw5" STEP: Removing the test directory Mar 25 18:18:18.852: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-b02d67c8-2def-4dd6-96af-d22ac92795f5 && rm -r /tmp/local-volume-test-b02d67c8-2def-4dd6-96af-d22ac92795f5] Namespace:persistent-local-volumes-test-5946 PodName:hostexec-latest-worker-4t6pg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:18:18.852: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:18:19.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5946" for this suite. 
S [SKIPPING] [20.545 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Multi-AZ Cluster Volumes should only be allowed to provision PDs in zones where nodes exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:61 [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:18:19.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename multi-az STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:46 Mar 25 18:18:19.097: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:18:19.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multi-az-8918" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.137 seconds] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should only be allowed to provision PDs in zones where nodes exist [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:61 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:47 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:18:19.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 STEP: Building a driver namespace object, basename csi-mock-volumes-9934 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 18:18:19.427: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9934-427/csi-attacher Mar 25 18:18:19.475: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9934 Mar 25 18:18:19.475: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9934 Mar 25 18:18:19.487: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9934 Mar 25 18:18:19.493: INFO: creating *v1.Role: csi-mock-volumes-9934-427/external-attacher-cfg-csi-mock-volumes-9934 Mar 25 18:18:19.499: INFO: creating *v1.RoleBinding: csi-mock-volumes-9934-427/csi-attacher-role-cfg Mar 25 18:18:19.516: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9934-427/csi-provisioner Mar 25 18:18:19.530: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9934 Mar 25 18:18:19.530: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9934 Mar 25 18:18:19.535: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9934 Mar 25 18:18:19.569: INFO: creating *v1.Role: csi-mock-volumes-9934-427/external-provisioner-cfg-csi-mock-volumes-9934 Mar 25 18:18:19.613: INFO: creating *v1.RoleBinding: csi-mock-volumes-9934-427/csi-provisioner-role-cfg Mar 25 18:18:19.629: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9934-427/csi-resizer Mar 25 18:18:19.643: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9934 Mar 25 18:18:19.643: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9934 Mar 25 18:18:19.649: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9934 Mar 25 18:18:19.655: INFO: creating *v1.Role: csi-mock-volumes-9934-427/external-resizer-cfg-csi-mock-volumes-9934 Mar 25 18:18:19.661: INFO: creating *v1.RoleBinding: csi-mock-volumes-9934-427/csi-resizer-role-cfg Mar 25 18:18:19.677: INFO: creating 
*v1.ServiceAccount: csi-mock-volumes-9934-427/csi-snapshotter Mar 25 18:18:19.702: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9934 Mar 25 18:18:19.702: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9934 Mar 25 18:18:19.780: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9934 Mar 25 18:18:19.784: INFO: creating *v1.Role: csi-mock-volumes-9934-427/external-snapshotter-leaderelection-csi-mock-volumes-9934 Mar 25 18:18:19.793: INFO: creating *v1.RoleBinding: csi-mock-volumes-9934-427/external-snapshotter-leaderelection Mar 25 18:18:19.816: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9934-427/csi-mock Mar 25 18:18:19.829: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9934 Mar 25 18:18:19.851: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9934 Mar 25 18:18:19.867: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9934 Mar 25 18:18:19.906: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9934 Mar 25 18:18:19.909: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9934 Mar 25 18:18:19.912: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9934 Mar 25 18:18:19.918: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9934 Mar 25 18:18:19.925: INFO: creating *v1.StatefulSet: csi-mock-volumes-9934-427/csi-mockplugin Mar 25 18:18:19.954: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9934 Mar 25 18:18:19.978: INFO: creating *v1.StatefulSet: csi-mock-volumes-9934-427/csi-mockplugin-resizer Mar 25 18:18:20.002: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9934" Mar 25 18:18:20.044: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9934 to register on node latest-worker2 STEP: Creating pod Mar 25 18:18:29.883: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 18:18:29.924: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-kx9db] to have phase Bound Mar 25 18:18:29.927: INFO: PersistentVolumeClaim pvc-kx9db found but phase is Pending instead of Bound. 
Mar 25 18:18:31.931: INFO: PersistentVolumeClaim pvc-kx9db found and phase=Bound (2.006993394s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-fxbms Mar 25 18:18:37.975: INFO: Deleting pod "pvc-volume-tester-fxbms" in namespace "csi-mock-volumes-9934" Mar 25 18:18:37.981: INFO: Wait up to 5m0s for pod "pvc-volume-tester-fxbms" to be fully deleted STEP: Deleting claim pvc-kx9db Mar 25 18:18:46.112: INFO: Waiting up to 2m0s for PersistentVolume pvc-729b4b8f-49a0-4f1b-8687-3dcf9e8f75e6 to get deleted Mar 25 18:18:46.272: INFO: PersistentVolume pvc-729b4b8f-49a0-4f1b-8687-3dcf9e8f75e6 found and phase=Bound (160.065791ms) Mar 25 18:18:48.275: INFO: PersistentVolume pvc-729b4b8f-49a0-4f1b-8687-3dcf9e8f75e6 was removed STEP: Deleting storageclass csi-mock-volumes-9934-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-9934 STEP: Waiting for namespaces [csi-mock-volumes-9934] to vanish STEP: uninstalling csi mock driver Mar 25 18:18:54.328: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9934-427/csi-attacher Mar 25 18:18:54.334: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9934 Mar 25 18:18:54.342: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9934 Mar 25 18:18:54.353: INFO: deleting *v1.Role: csi-mock-volumes-9934-427/external-attacher-cfg-csi-mock-volumes-9934 Mar 25 18:18:54.376: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9934-427/csi-attacher-role-cfg Mar 25 18:18:54.389: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9934-427/csi-provisioner Mar 25 18:18:54.430: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9934 Mar 25 18:18:54.438: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9934 Mar 25 18:18:54.455: INFO: deleting *v1.Role: csi-mock-volumes-9934-427/external-provisioner-cfg-csi-mock-volumes-9934 Mar 25 18:18:54.462: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9934-427/csi-provisioner-role-cfg Mar 25 18:18:54.468: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9934-427/csi-resizer Mar 25 18:18:54.473: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9934 Mar 25 18:18:54.479: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9934 Mar 25 18:18:54.515: INFO: deleting *v1.Role: csi-mock-volumes-9934-427/external-resizer-cfg-csi-mock-volumes-9934 Mar 25 18:18:54.541: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9934-427/csi-resizer-role-cfg Mar 25 18:18:54.545: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9934-427/csi-snapshotter Mar 25 18:18:54.551: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9934 Mar 25 18:18:54.562: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9934 Mar 25 18:18:54.568: INFO: deleting *v1.Role: csi-mock-volumes-9934-427/external-snapshotter-leaderelection-csi-mock-volumes-9934 Mar 25 18:18:54.574: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9934-427/external-snapshotter-leaderelection Mar 25 18:18:54.581: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9934-427/csi-mock Mar 25 18:18:54.610: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9934 Mar 25 18:18:54.636: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9934 Mar 25 18:18:54.656: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9934 Mar 25 18:18:54.665: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9934 Mar 25 18:18:54.671: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9934 Mar 25 18:18:54.677: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9934 Mar 25 18:18:54.683: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9934 Mar 25 18:18:54.689: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9934-427/csi-mockplugin Mar 25 18:18:54.695: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-9934 Mar 25 18:18:54.701: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9934-427/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-9934-427 STEP: Waiting for namespaces [csi-mock-volumes-9934-427] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:19:38.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:79.592 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI online volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:672 should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":115,"completed":71,"skipped":4514,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:19:38.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-896b7a52-dc58-40e4-816f-7a6c797afa69" Mar 25 18:19:42.887: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-896b7a52-dc58-40e4-816f-7a6c797afa69 && dd if=/dev/zero 
of=/tmp/local-volume-test-896b7a52-dc58-40e4-816f-7a6c797afa69/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-896b7a52-dc58-40e4-816f-7a6c797afa69/file] Namespace:persistent-local-volumes-test-9660 PodName:hostexec-latest-worker2-4sbf5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:19:42.887: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:19:43.055: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-896b7a52-dc58-40e4-816f-7a6c797afa69/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9660 PodName:hostexec-latest-worker2-4sbf5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:19:43.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:19:43.164: INFO: Creating a PV followed by a PVC Mar 25 18:19:43.174: INFO: Waiting for PV local-pvr85ks to bind to PVC pvc-j58fs Mar 25 18:19:43.174: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-j58fs] to have phase Bound Mar 25 18:19:43.224: INFO: PersistentVolumeClaim pvc-j58fs found but phase is Pending instead of Bound. Mar 25 18:19:45.229: INFO: PersistentVolumeClaim pvc-j58fs found and phase=Bound (2.054879231s) Mar 25 18:19:45.229: INFO: Waiting up to 3m0s for PersistentVolume local-pvr85ks to have phase Bound Mar 25 18:19:45.231: INFO: PersistentVolume local-pvr85ks found and phase=Bound (2.23394ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 Mar 25 18:19:45.235: INFO: We don't set fsGroup on block device, skipped. 
[AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 18:19:45.236: INFO: Deleting PersistentVolumeClaim "pvc-j58fs" Mar 25 18:19:45.240: INFO: Deleting PersistentVolume "local-pvr85ks" Mar 25 18:19:46.027: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-896b7a52-dc58-40e4-816f-7a6c797afa69/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9660 PodName:hostexec-latest-worker2-4sbf5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:19:46.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-896b7a52-dc58-40e4-816f-7a6c797afa69/file Mar 25 18:19:46.330: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-9660 PodName:hostexec-latest-worker2-4sbf5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:19:46.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-896b7a52-dc58-40e4-816f-7a6c797afa69 Mar 25 18:19:46.471: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-896b7a52-dc58-40e4-816f-7a6c797afa69] Namespace:persistent-local-volumes-test-9660 PodName:hostexec-latest-worker2-4sbf5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:19:46.471: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:19:46.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9660" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [7.828 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 We don't set fsGroup on block device, skipped. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:19:46.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker" using path "/tmp/local-volume-test-b1b7d8fa-fba7-49d3-9173-d38d9c963008" Mar 25 18:19:51.743: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-b1b7d8fa-fba7-49d3-9173-d38d9c963008 && dd if=/dev/zero of=/tmp/local-volume-test-b1b7d8fa-fba7-49d3-9173-d38d9c963008/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-b1b7d8fa-fba7-49d3-9173-d38d9c963008/file] Namespace:persistent-local-volumes-test-3164 PodName:hostexec-latest-worker-qghnn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:19:51.743: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:19:51.899: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-b1b7d8fa-fba7-49d3-9173-d38d9c963008/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3164 PodName:hostexec-latest-worker-qghnn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:19:51.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:19:51.986: INFO: Creating a PV followed by a PVC Mar 25 18:19:52.001: INFO: Waiting for PV local-pvgmgft to bind to PVC pvc-qzjs9 Mar 25 18:19:52.001: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-qzjs9] to have phase Bound Mar 25 18:19:52.030: INFO: PersistentVolumeClaim pvc-qzjs9 found but phase is Pending instead of Bound. 
Mar 25 18:19:54.035: INFO: PersistentVolumeClaim pvc-qzjs9 found and phase=Bound (2.034519332s) Mar 25 18:19:54.035: INFO: Waiting up to 3m0s for PersistentVolume local-pvgmgft to have phase Bound Mar 25 18:19:54.039: INFO: PersistentVolume local-pvgmgft found and phase=Bound (3.277736ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Mar 25 18:20:00.094: INFO: pod "pod-2e36f87e-aa64-4e22-b7c6-1b92cde3d815" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 18:20:00.094: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3164 PodName:pod-2e36f87e-aa64-4e22-b7c6-1b92cde3d815 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:20:00.094: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:20:00.221: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000067 seconds, 262.4KB/s", err: Mar 25 18:20:00.221: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-3164 PodName:pod-2e36f87e-aa64-4e22-b7c6-1b92cde3d815 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:20:00.221: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:20:00.326: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Mar 25 18:20:04.404: INFO: pod "pod-b9f613ba-4318-4c8b-b1aa-c108bb5b113c" created on Node "latest-worker" Mar 25 18:20:04.404: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-3164 PodName:pod-b9f613ba-4318-4c8b-b1aa-c108bb5b113c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:20:04.404: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:20:04.517: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Writing in pod2 Mar 25 18:20:04.517: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3164 PodName:pod-b9f613ba-4318-4c8b-b1aa-c108bb5b113c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:20:04.517: INFO: 
>>> kubeConfig: /root/.kube/config Mar 25 18:20:04.611: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n11 bytes (11B) copied, 0.000053 seconds, 202.7KB/s", err: STEP: Reading in pod1 Mar 25 18:20:04.611: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-3164 PodName:pod-2e36f87e-aa64-4e22-b7c6-1b92cde3d815 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:20:04.611: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:20:04.719: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "/dev/loop0.ontent...................................................................................", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-2e36f87e-aa64-4e22-b7c6-1b92cde3d815 in namespace persistent-local-volumes-test-3164 STEP: Deleting pod2 STEP: Deleting pod pod-b9f613ba-4318-4c8b-b1aa-c108bb5b113c in namespace persistent-local-volumes-test-3164 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 18:20:04.753: INFO: Deleting PersistentVolumeClaim "pvc-qzjs9" Mar 25 18:20:04.774: INFO: Deleting PersistentVolume "local-pvgmgft" Mar 25 18:20:04.782: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-b1b7d8fa-fba7-49d3-9173-d38d9c963008/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3164 PodName:hostexec-latest-worker-qghnn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:20:04.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker" at path /tmp/local-volume-test-b1b7d8fa-fba7-49d3-9173-d38d9c963008/file Mar 25 18:20:04.889: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-3164 PodName:hostexec-latest-worker-qghnn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:20:04.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-b1b7d8fa-fba7-49d3-9173-d38d9c963008 Mar 25 18:20:04.989: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b1b7d8fa-fba7-49d3-9173-d38d9c963008] Namespace:persistent-local-volumes-test-3164 PodName:hostexec-latest-worker-qghnn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:20:04.989: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:20:05.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3164" for this suite. 
• [SLOW TEST:18.745 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":115,"completed":72,"skipped":4636,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSS ------------------------------ [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:20:05.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : configmap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Mar 25 18:20:35.828: INFO: Deleting pod "pv-362"/"pod-ephm-test-projected-kbsh" Mar 25 18:20:35.828: INFO: Deleting pod "pod-ephm-test-projected-kbsh" in namespace "pv-362" Mar 25 18:20:35.835: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-kbsh" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:20:45.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-362" for this suite. 
• [SLOW TEST:40.538 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : configmap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":115,"completed":73,"skipped":4640,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:20:45.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity disabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-1747 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 18:20:46.070: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1747-4939/csi-attacher Mar 25 18:20:46.103: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1747 Mar 25 18:20:46.103: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1747 Mar 25 18:20:46.117: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1747 Mar 25 18:20:46.123: INFO: creating *v1.Role: csi-mock-volumes-1747-4939/external-attacher-cfg-csi-mock-volumes-1747 Mar 25 18:20:46.129: INFO: creating *v1.RoleBinding: csi-mock-volumes-1747-4939/csi-attacher-role-cfg Mar 25 18:20:46.175: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1747-4939/csi-provisioner Mar 25 18:20:46.183: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1747 Mar 25 18:20:46.183: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1747 Mar 25 18:20:46.189: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1747 Mar 25 18:20:46.205: INFO: creating *v1.Role: csi-mock-volumes-1747-4939/external-provisioner-cfg-csi-mock-volumes-1747 Mar 25 18:20:46.239: INFO: creating *v1.RoleBinding: csi-mock-volumes-1747-4939/csi-provisioner-role-cfg Mar 25 18:20:46.255: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1747-4939/csi-resizer Mar 25 18:20:46.262: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1747 Mar 25 18:20:46.262: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1747 Mar 25 18:20:46.267: INFO: creating *v1.ClusterRoleBinding: 
csi-resizer-role-csi-mock-volumes-1747 Mar 25 18:20:46.298: INFO: creating *v1.Role: csi-mock-volumes-1747-4939/external-resizer-cfg-csi-mock-volumes-1747 Mar 25 18:20:46.311: INFO: creating *v1.RoleBinding: csi-mock-volumes-1747-4939/csi-resizer-role-cfg Mar 25 18:20:46.326: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1747-4939/csi-snapshotter Mar 25 18:20:46.355: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1747 Mar 25 18:20:46.355: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1747 Mar 25 18:20:46.380: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1747 Mar 25 18:20:46.386: INFO: creating *v1.Role: csi-mock-volumes-1747-4939/external-snapshotter-leaderelection-csi-mock-volumes-1747 Mar 25 18:20:46.392: INFO: creating *v1.RoleBinding: csi-mock-volumes-1747-4939/external-snapshotter-leaderelection Mar 25 18:20:46.451: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1747-4939/csi-mock Mar 25 18:20:46.464: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1747 Mar 25 18:20:46.470: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1747 Mar 25 18:20:46.497: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1747 Mar 25 18:20:46.512: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1747 Mar 25 18:20:46.555: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1747 Mar 25 18:20:46.560: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1747 Mar 25 18:20:46.566: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1747 Mar 25 18:20:46.571: INFO: creating *v1.StatefulSet: csi-mock-volumes-1747-4939/csi-mockplugin Mar 25 18:20:46.593: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-1747 Mar 25 18:20:46.631: INFO: creating *v1.StatefulSet: csi-mock-volumes-1747-4939/csi-mockplugin-attacher Mar 25 18:20:46.679: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1747" Mar 25 18:20:46.692: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1747 to register on node latest-worker2 STEP: Creating pod Mar 25 18:21:01.327: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Mar 25 18:21:23.510: INFO: Deleting pod "pvc-volume-tester-l8l2m" in namespace "csi-mock-volumes-1747" Mar 25 18:21:23.516: INFO: Wait up to 5m0s for pod "pvc-volume-tester-l8l2m" to be fully deleted STEP: Deleting pod pvc-volume-tester-l8l2m Mar 25 18:22:25.525: INFO: Deleting pod "pvc-volume-tester-l8l2m" in namespace "csi-mock-volumes-1747" STEP: Deleting claim pvc-tsvq2 Mar 25 18:22:25.535: INFO: Waiting up to 2m0s for PersistentVolume pvc-a1ca07d8-3916-4106-81bb-6c230436684e to get deleted Mar 25 18:22:25.575: INFO: PersistentVolume pvc-a1ca07d8-3916-4106-81bb-6c230436684e found and phase=Bound (40.364344ms) Mar 25 18:22:27.579: INFO: PersistentVolume pvc-a1ca07d8-3916-4106-81bb-6c230436684e was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-1747 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-1747 STEP: Waiting for namespaces [csi-mock-volumes-1747] to vanish STEP: uninstalling csi mock driver Mar 25 18:22:33.596: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1747-4939/csi-attacher Mar 25 18:22:33.604: INFO: deleting 
*v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1747 Mar 25 18:22:33.618: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1747 Mar 25 18:22:33.627: INFO: deleting *v1.Role: csi-mock-volumes-1747-4939/external-attacher-cfg-csi-mock-volumes-1747 Mar 25 18:22:33.632: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1747-4939/csi-attacher-role-cfg Mar 25 18:22:33.637: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1747-4939/csi-provisioner Mar 25 18:22:33.685: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1747 Mar 25 18:22:33.692: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1747 Mar 25 18:22:33.704: INFO: deleting *v1.Role: csi-mock-volumes-1747-4939/external-provisioner-cfg-csi-mock-volumes-1747 Mar 25 18:22:33.710: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1747-4939/csi-provisioner-role-cfg Mar 25 18:22:33.716: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1747-4939/csi-resizer Mar 25 18:22:33.722: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1747 Mar 25 18:22:33.732: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1747 Mar 25 18:22:33.739: INFO: deleting *v1.Role: csi-mock-volumes-1747-4939/external-resizer-cfg-csi-mock-volumes-1747 Mar 25 18:22:33.745: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1747-4939/csi-resizer-role-cfg Mar 25 18:22:33.764: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1747-4939/csi-snapshotter Mar 25 18:22:33.776: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1747 Mar 25 18:22:33.802: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1747 Mar 25 18:22:33.808: INFO: deleting *v1.Role: csi-mock-volumes-1747-4939/external-snapshotter-leaderelection-csi-mock-volumes-1747 Mar 25 18:22:33.812: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1747-4939/external-snapshotter-leaderelection Mar 25 18:22:33.818: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1747-4939/csi-mock Mar 25 18:22:33.827: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1747 Mar 25 18:22:33.833: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1747 Mar 25 18:22:33.841: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1747 Mar 25 18:22:33.878: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1747 Mar 25 18:22:33.894: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1747 Mar 25 18:22:33.900: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1747 Mar 25 18:22:33.924: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1747 Mar 25 18:22:33.936: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1747-4939/csi-mockplugin Mar 25 18:22:33.955: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-1747 Mar 25 18:22:33.960: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1747-4939/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-1747-4939 STEP: Waiting for namespaces [csi-mock-volumes-1747-4939] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:23:29.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:164.123 seconds] [sig-storage] CSI mock 
volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity disabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled","total":115,"completed":74,"skipped":4692,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:23:29.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-1f104ea3-9d99-4eb8-a3ae-d3c9cc560fe5" Mar 25 18:23:32.117: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1f104ea3-9d99-4eb8-a3ae-d3c9cc560fe5 && dd if=/dev/zero of=/tmp/local-volume-test-1f104ea3-9d99-4eb8-a3ae-d3c9cc560fe5/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-1f104ea3-9d99-4eb8-a3ae-d3c9cc560fe5/file] Namespace:persistent-local-volumes-test-9026 PodName:hostexec-latest-worker2-cxfgf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:23:32.117: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:23:32.284: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-1f104ea3-9d99-4eb8-a3ae-d3c9cc560fe5/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9026 PodName:hostexec-latest-worker2-cxfgf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:23:32.285: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:23:32.404: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-1f104ea3-9d99-4eb8-a3ae-d3c9cc560fe5 && chmod o+rwx /tmp/local-volume-test-1f104ea3-9d99-4eb8-a3ae-d3c9cc560fe5] Namespace:persistent-local-volumes-test-9026 PodName:hostexec-latest-worker2-cxfgf 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:23:32.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:23:32.816: INFO: Creating a PV followed by a PVC Mar 25 18:23:32.830: INFO: Waiting for PV local-pvgvtkv to bind to PVC pvc-dvtn4 Mar 25 18:23:32.830: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-dvtn4] to have phase Bound Mar 25 18:23:32.850: INFO: PersistentVolumeClaim pvc-dvtn4 found but phase is Pending instead of Bound. Mar 25 18:23:34.855: INFO: PersistentVolumeClaim pvc-dvtn4 found and phase=Bound (2.024796014s) Mar 25 18:23:34.855: INFO: Waiting up to 3m0s for PersistentVolume local-pvgvtkv to have phase Bound Mar 25 18:23:34.858: INFO: PersistentVolume local-pvgvtkv found and phase=Bound (3.253393ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Mar 25 18:23:34.864: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 18:23:34.866: INFO: Deleting PersistentVolumeClaim "pvc-dvtn4" Mar 25 18:23:34.871: INFO: Deleting PersistentVolume "local-pvgvtkv" Mar 25 18:23:34.895: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-1f104ea3-9d99-4eb8-a3ae-d3c9cc560fe5] Namespace:persistent-local-volumes-test-9026 PodName:hostexec-latest-worker2-cxfgf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:23:34.895: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:23:35.069: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-1f104ea3-9d99-4eb8-a3ae-d3c9cc560fe5/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9026 PodName:hostexec-latest-worker2-cxfgf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:23:35.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-1f104ea3-9d99-4eb8-a3ae-d3c9cc560fe5/file Mar 25 18:23:35.166: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-9026 PodName:hostexec-latest-worker2-cxfgf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:23:35.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-1f104ea3-9d99-4eb8-a3ae-d3c9cc560fe5 Mar 25 18:23:35.301: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1f104ea3-9d99-4eb8-a3ae-d3c9cc560fe5] Namespace:persistent-local-volumes-test-9026 PodName:hostexec-latest-worker2-cxfgf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true 
Quiet:false} Mar 25 18:23:35.301: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:23:35.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9026" for this suite. S [SKIPPING] [5.439 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:23:35.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90 STEP: Creating projection with secret that has name projected-secret-test-2e719b69-c54f-4f66-9479-bf428d7c9d13 STEP: Creating a pod to test consume secrets Mar 25 18:23:35.683: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-32507ecb-f21f-4011-b752-bec83937b263" in namespace "projected-8950" to be "Succeeded or Failed" Mar 25 18:23:35.706: INFO: Pod "pod-projected-secrets-32507ecb-f21f-4011-b752-bec83937b263": Phase="Pending", Reason="", readiness=false. Elapsed: 22.974173ms Mar 25 18:23:37.757: INFO: Pod "pod-projected-secrets-32507ecb-f21f-4011-b752-bec83937b263": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074650563s Mar 25 18:23:39.762: INFO: Pod "pod-projected-secrets-32507ecb-f21f-4011-b752-bec83937b263": Phase="Running", Reason="", readiness=true. Elapsed: 4.079649761s Mar 25 18:23:41.767: INFO: Pod "pod-projected-secrets-32507ecb-f21f-4011-b752-bec83937b263": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.08414141s STEP: Saw pod success Mar 25 18:23:41.767: INFO: Pod "pod-projected-secrets-32507ecb-f21f-4011-b752-bec83937b263" satisfied condition "Succeeded or Failed" Mar 25 18:23:41.771: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-32507ecb-f21f-4011-b752-bec83937b263 container projected-secret-volume-test: STEP: delete the pod Mar 25 18:23:41.795: INFO: Waiting for pod pod-projected-secrets-32507ecb-f21f-4011-b752-bec83937b263 to disappear Mar 25 18:23:41.798: INFO: Pod pod-projected-secrets-32507ecb-f21f-4011-b752-bec83937b263 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:23:41.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8950" for this suite. STEP: Destroying namespace "secret-namespace-2089" for this suite. • [SLOW TEST:6.405 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":115,"completed":75,"skipped":4761,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Volumes GlusterFS should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:129 [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:23:41.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Mar 25 18:23:41.909: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:23:41.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-8313" for this suite. 
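The Projected secret spec above mounts a secret into a pod through a projected volume and checks that a same-named secret in a different namespace does not interfere. As a rough illustration of the kind of pod spec involved (not the test's actual code), here is a minimal client-go sketch that builds such a pod object and prints it; the secret name, image, and mount path are placeholder values.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-projected-secrets-"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: v1.VolumeSource{
					Projected: &v1.ProjectedVolumeSource{
						// A projected volume can combine several sources
						// (secret, configMap, downwardAPI, serviceAccountToken);
						// here only one secret is projected.
						Sources: []v1.VolumeProjection{{
							Secret: &v1.SecretProjection{
								LocalObjectReference: v1.LocalObjectReference{
									Name: "projected-secret-test", // placeholder secret name
								},
							},
						}},
					},
				},
			}},
			Containers: []v1.Container{{
				Name:  "projected-secret-volume-test",
				Image: "busybox", // placeholder image
				// List and print the projected files, then exit.
				Command: []string{"sh", "-c",
					"ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/*"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}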
S [SKIPPING] in Spec Setup (BeforeEach) [0.121 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 GlusterFS [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:128 should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:129 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:23:41.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 18:23:46.083: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-53172475-c083-4653-9abb-69762a302d84 && mount --bind /tmp/local-volume-test-53172475-c083-4653-9abb-69762a302d84 /tmp/local-volume-test-53172475-c083-4653-9abb-69762a302d84] Namespace:persistent-local-volumes-test-4568 PodName:hostexec-latest-worker2-mtksh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:23:46.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:23:46.217: INFO: Creating a PV followed by a PVC Mar 25 18:23:46.232: INFO: Waiting for PV local-pvcdpr4 to bind to PVC pvc-5zg6n Mar 25 18:23:46.232: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-5zg6n] to have phase Bound Mar 25 18:23:46.250: INFO: PersistentVolumeClaim pvc-5zg6n found but phase is Pending instead of Bound. 
Mar 25 18:23:48.255: INFO: PersistentVolumeClaim pvc-5zg6n found and phase=Bound (2.02262662s) Mar 25 18:23:48.255: INFO: Waiting up to 3m0s for PersistentVolume local-pvcdpr4 to have phase Bound Mar 25 18:23:48.258: INFO: PersistentVolume local-pvcdpr4 found and phase=Bound (3.046663ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 18:23:52.323: INFO: pod "pod-c445f16e-6811-4294-a960-b31a50174383" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 18:23:52.324: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4568 PodName:pod-c445f16e-6811-4294-a960-b31a50174383 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:23:52.324: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:23:52.449: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Mar 25 18:23:52.449: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4568 PodName:pod-c445f16e-6811-4294-a960-b31a50174383 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:23:52.449: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:23:52.547: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-c445f16e-6811-4294-a960-b31a50174383 in namespace persistent-local-volumes-test-4568 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 18:23:52.552: INFO: Deleting PersistentVolumeClaim "pvc-5zg6n" Mar 25 18:23:52.581: INFO: Deleting PersistentVolume "local-pvcdpr4" STEP: Removing the test directory Mar 25 18:23:52.644: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-53172475-c083-4653-9abb-69762a302d84 && rm -r /tmp/local-volume-test-53172475-c083-4653-9abb-69762a302d84] Namespace:persistent-local-volumes-test-4568 PodName:hostexec-latest-worker2-mtksh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:23:52.644: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:23:52.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4568" for this suite. 
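The PersistentVolumes-local specs above follow a common pattern: the harness prepares a directory (or block device) on one specific node, pre-creates a PV pointing at it with node affinity for that node, then creates a PVC and waits for the pair to reach phase Bound before running pods against the claim. A minimal sketch of such a statically provisioned pair, assuming the k8s.io/api field names of this release; the path, node name, storage class, and sizes are placeholders, not the test's generated values.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	className := "local-storage" // placeholder storage class

	// PV backed by a directory on a single node; node affinity pins any
	// pod that uses the bound claim to that node.
	pv := &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "local-pv"},
		Spec: v1.PersistentVolumeSpec{
			StorageClassName: className,
			Capacity:         v1.ResourceList{v1.ResourceStorage: resource.MustParse("2Gi")},
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				Local: &v1.LocalVolumeSource{Path: "/tmp/local-volume-test"}, // placeholder path
			},
			NodeAffinity: &v1.VolumeNodeAffinity{
				Required: &v1.NodeSelector{
					NodeSelectorTerms: []v1.NodeSelectorTerm{{
						MatchExpressions: []v1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: v1.NodeSelectorOpIn,
							Values:   []string{"latest-worker2"},
						}},
					}},
				},
			},
		},
	}

	// Matching claim: same storage class and a request the PV can satisfy,
	// so the two bind to each other (the "Waiting for PV ... to bind to PVC"
	// steps in the log wait for exactly this).
	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-"},
		Spec: v1.PersistentVolumeClaimSpec{
			StorageClassName: &className,
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("2Gi")},
			},
		},
	}

	for _, obj := range []interface{}{pv, pvc} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}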
• [SLOW TEST:10.827 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":115,"completed":76,"skipped":4803,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:23:52.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, insufficient capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-8510 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 18:23:52.950: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8510-2493/csi-attacher Mar 25 18:23:52.953: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8510 Mar 25 18:23:52.953: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8510 Mar 25 18:23:52.960: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8510 Mar 25 18:23:53.014: INFO: creating *v1.Role: csi-mock-volumes-8510-2493/external-attacher-cfg-csi-mock-volumes-8510 Mar 25 18:23:53.026: INFO: creating *v1.RoleBinding: csi-mock-volumes-8510-2493/csi-attacher-role-cfg Mar 25 18:23:53.038: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8510-2493/csi-provisioner Mar 25 18:23:53.044: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8510 Mar 25 18:23:53.044: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8510 Mar 25 18:23:53.050: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8510 Mar 25 18:23:53.056: INFO: creating *v1.Role: csi-mock-volumes-8510-2493/external-provisioner-cfg-csi-mock-volumes-8510 Mar 25 18:23:53.078: INFO: creating *v1.RoleBinding: csi-mock-volumes-8510-2493/csi-provisioner-role-cfg Mar 25 18:23:53.092: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8510-2493/csi-resizer Mar 25 18:23:53.107: INFO: creating *v1.ClusterRole: 
external-resizer-runner-csi-mock-volumes-8510 Mar 25 18:23:53.107: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8510 Mar 25 18:23:53.163: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8510 Mar 25 18:23:53.167: INFO: creating *v1.Role: csi-mock-volumes-8510-2493/external-resizer-cfg-csi-mock-volumes-8510 Mar 25 18:23:53.182: INFO: creating *v1.RoleBinding: csi-mock-volumes-8510-2493/csi-resizer-role-cfg Mar 25 18:23:53.218: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8510-2493/csi-snapshotter Mar 25 18:23:53.230: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8510 Mar 25 18:23:53.230: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8510 Mar 25 18:23:53.248: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8510 Mar 25 18:23:53.260: INFO: creating *v1.Role: csi-mock-volumes-8510-2493/external-snapshotter-leaderelection-csi-mock-volumes-8510 Mar 25 18:23:53.301: INFO: creating *v1.RoleBinding: csi-mock-volumes-8510-2493/external-snapshotter-leaderelection Mar 25 18:23:53.308: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8510-2493/csi-mock Mar 25 18:23:53.314: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8510 Mar 25 18:23:53.319: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8510 Mar 25 18:23:53.326: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8510 Mar 25 18:23:53.345: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8510 Mar 25 18:23:53.371: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8510 Mar 25 18:23:53.386: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8510 Mar 25 18:23:53.392: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8510 Mar 25 18:23:53.398: INFO: creating *v1.StatefulSet: csi-mock-volumes-8510-2493/csi-mockplugin Mar 25 18:23:53.427: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8510 Mar 25 18:23:53.443: INFO: creating *v1.StatefulSet: csi-mock-volumes-8510-2493/csi-mockplugin-attacher Mar 25 18:23:53.483: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8510" Mar 25 18:23:53.595: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8510 to register on node latest-worker Mar 25 18:24:03.463: FAIL: create CSIStorageCapacity {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name: GenerateName:fake-capacity- Namespace: SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} NodeTopology:&LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[]LabelSelectorRequirement{},} StorageClassName:mock-csi-storage-capacity-csi-mock-volumes-8510 Capacity:1Mi MaximumVolumeSize:} Unexpected error: <*errors.StatusError | 0xc0045fab40>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred Full Stack Trace 
k8s.io/kubernetes/test/e2e/storage.glob..func1.14.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1201 +0x47a k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00331cd80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00331cd80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc00331cd80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8510 STEP: Waiting for namespaces [csi-mock-volumes-8510] to vanish STEP: uninstalling csi mock driver Mar 25 18:24:09.475: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8510-2493/csi-attacher Mar 25 18:24:09.486: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8510 Mar 25 18:24:09.493: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8510 Mar 25 18:24:09.503: INFO: deleting *v1.Role: csi-mock-volumes-8510-2493/external-attacher-cfg-csi-mock-volumes-8510 Mar 25 18:24:09.509: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8510-2493/csi-attacher-role-cfg Mar 25 18:24:09.515: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8510-2493/csi-provisioner Mar 25 18:24:09.521: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8510 Mar 25 18:24:09.572: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8510 Mar 25 18:24:09.610: INFO: deleting *v1.Role: csi-mock-volumes-8510-2493/external-provisioner-cfg-csi-mock-volumes-8510 Mar 25 18:24:09.630: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8510-2493/csi-provisioner-role-cfg Mar 25 18:24:09.635: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8510-2493/csi-resizer Mar 25 18:24:09.641: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8510 Mar 25 18:24:09.653: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8510 Mar 25 18:24:09.696: INFO: deleting *v1.Role: csi-mock-volumes-8510-2493/external-resizer-cfg-csi-mock-volumes-8510 Mar 25 18:24:09.704: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8510-2493/csi-resizer-role-cfg Mar 25 18:24:10.404: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8510-2493/csi-snapshotter Mar 25 18:24:10.491: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8510 Mar 25 18:24:10.610: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8510 Mar 25 18:24:10.818: INFO: deleting *v1.Role: csi-mock-volumes-8510-2493/external-snapshotter-leaderelection-csi-mock-volumes-8510 Mar 25 18:24:10.834: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8510-2493/external-snapshotter-leaderelection Mar 25 18:24:10.851: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8510-2493/csi-mock Mar 25 18:24:10.894: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8510 Mar 25 18:24:11.136: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8510 Mar 25 18:24:11.425: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8510 Mar 25 18:24:11.445: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8510 Mar 25 18:24:11.468: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-resizer-role-csi-mock-volumes-8510 Mar 25 18:24:11.572: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8510 Mar 25 18:24:11.575: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8510 Mar 25 18:24:11.581: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8510-2493/csi-mockplugin Mar 25 18:24:11.588: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8510 Mar 25 18:24:11.596: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8510-2493/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-8510-2493 STEP: Waiting for namespaces [csi-mock-volumes-8510-2493] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:25:07.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • Failure [74.903 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity used, insufficient capacity [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 Mar 25 18:24:03.463: create CSIStorageCapacity {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name: GenerateName:fake-capacity- Namespace: SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} NodeTopology:&LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[]LabelSelectorRequirement{},} StorageClassName:mock-csi-storage-capacity-csi-mock-volumes-8510 Capacity:1Mi MaximumVolumeSize:} Unexpected error: <*errors.StatusError | 0xc0045fab40>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1201 ------------------------------ {"msg":"FAILED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":115,"completed":76,"skipped":4842,"failed":3,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes 
client Mar 25 18:25:07.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-17dd20a1-616b-4b37-941b-aaedcb7e1435" Mar 25 18:25:11.849: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-17dd20a1-616b-4b37-941b-aaedcb7e1435 && dd if=/dev/zero of=/tmp/local-volume-test-17dd20a1-616b-4b37-941b-aaedcb7e1435/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-17dd20a1-616b-4b37-941b-aaedcb7e1435/file] Namespace:persistent-local-volumes-test-1650 PodName:hostexec-latest-worker2-9b685 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:25:11.849: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:25:12.026: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-17dd20a1-616b-4b37-941b-aaedcb7e1435/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1650 PodName:hostexec-latest-worker2-9b685 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:25:12.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:25:12.118: INFO: Creating a PV followed by a PVC Mar 25 18:25:12.150: INFO: Waiting for PV local-pvrcmch to bind to PVC pvc-t59p2 Mar 25 18:25:12.150: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-t59p2] to have phase Bound Mar 25 18:25:12.194: INFO: PersistentVolumeClaim pvc-t59p2 found but phase is Pending instead of Bound. 
Mar 25 18:25:14.200: INFO: PersistentVolumeClaim pvc-t59p2 found and phase=Bound (2.050300806s) Mar 25 18:25:14.200: INFO: Waiting up to 3m0s for PersistentVolume local-pvrcmch to have phase Bound Mar 25 18:25:14.204: INFO: PersistentVolume local-pvrcmch found and phase=Bound (3.376074ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Mar 25 18:25:18.272: INFO: pod "pod-ce4c8c0a-7a6a-4f5b-92bc-e2d79493110a" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 18:25:18.272: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1650 PodName:pod-ce4c8c0a-7a6a-4f5b-92bc-e2d79493110a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:25:18.272: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:25:18.381: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 18:25:18.381: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1650 PodName:pod-ce4c8c0a-7a6a-4f5b-92bc-e2d79493110a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:25:18.381: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:25:18.505: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Mar 25 18:25:22.541: INFO: pod "pod-75123b07-b37f-40e0-86c0-7e225d21d83b" created on Node "latest-worker2" Mar 25 18:25:22.541: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1650 PodName:pod-75123b07-b37f-40e0-86c0-7e225d21d83b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:25:22.541: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:25:22.686: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Mar 25 18:25:22.686: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1650 PodName:pod-75123b07-b37f-40e0-86c0-7e225d21d83b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:25:22.686: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:25:22.777: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Mar 25 18:25:22.777: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1650 PodName:pod-ce4c8c0a-7a6a-4f5b-92bc-e2d79493110a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:25:22.777: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:25:22.871: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/dev/loop0", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-ce4c8c0a-7a6a-4f5b-92bc-e2d79493110a in namespace persistent-local-volumes-test-1650 STEP: Deleting pod2 STEP: Deleting pod 
pod-75123b07-b37f-40e0-86c0-7e225d21d83b in namespace persistent-local-volumes-test-1650 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 18:25:22.913: INFO: Deleting PersistentVolumeClaim "pvc-t59p2" Mar 25 18:25:22.918: INFO: Deleting PersistentVolume "local-pvrcmch" Mar 25 18:25:22.963: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-17dd20a1-616b-4b37-941b-aaedcb7e1435/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1650 PodName:hostexec-latest-worker2-9b685 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:25:22.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-17dd20a1-616b-4b37-941b-aaedcb7e1435/file Mar 25 18:25:23.082: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-1650 PodName:hostexec-latest-worker2-9b685 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:25:23.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-17dd20a1-616b-4b37-941b-aaedcb7e1435 Mar 25 18:25:23.184: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-17dd20a1-616b-4b37-941b-aaedcb7e1435] Namespace:persistent-local-volumes-test-1650 PodName:hostexec-latest-worker2-9b685 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:25:23.184: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:25:23.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1650" for this suite. 
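Looking back at the failed "CSIStorageCapacity used, insufficient capacity" spec above: the test tries to create a CSIStorageCapacity object advertising 1Mi for the mock storage class, and the create call returns 404 ("the server could not find the requested resource"), which usually indicates the cluster is not serving the CSIStorageCapacity resource at the group/version the test binary expects (version or feature-gate skew between test and apiserver). Below is a minimal client-go sketch of an equivalent create call, assuming a client-go release that provides the storage.k8s.io/v1beta1 CSIStorageCapacity client; the kubeconfig path, target namespace, and storage class name are placeholders, not the test's values.

package main

import (
	"context"
	"fmt"

	storagev1beta1 "k8s.io/api/storage/v1beta1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a kubeconfig path (placeholder).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Advertise 1Mi of capacity for a (placeholder) storage class, with an
	// empty topology selector that matches every node.
	capQty := resource.MustParse("1Mi")
	csc := &storagev1beta1.CSIStorageCapacity{
		ObjectMeta:       metav1.ObjectMeta{GenerateName: "fake-capacity-"},
		NodeTopology:     &metav1.LabelSelector{},
		StorageClassName: "mock-csi-storage-capacity", // placeholder class name
		Capacity:         &capQty,
	}

	created, err := cs.StorageV1beta1().CSIStorageCapacities("default").
		Create(context.TODO(), csc, metav1.CreateOptions{})
	if err != nil {
		// A NotFound (404) here typically means the apiserver does not serve
		// CSIStorageCapacity at storage.k8s.io/v1beta1 on this cluster.
		panic(err)
	}
	fmt.Println("created", created.Name)
}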
• [SLOW TEST:15.637 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":115,"completed":77,"skipped":4844,"failed":3,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:25:23.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 18:25:27.462: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-430aff82-15e3-412a-a1de-98c798295c9f-backend && ln -s /tmp/local-volume-test-430aff82-15e3-412a-a1de-98c798295c9f-backend /tmp/local-volume-test-430aff82-15e3-412a-a1de-98c798295c9f] Namespace:persistent-local-volumes-test-4220 PodName:hostexec-latest-worker2-t9cf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:25:27.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:25:27.854: INFO: Creating a PV followed by a PVC Mar 25 18:25:27.974: INFO: Waiting for PV local-pvwfnpk to bind to PVC pvc-rbttx Mar 25 18:25:27.974: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-rbttx] to have phase Bound Mar 25 18:25:28.002: INFO: PersistentVolumeClaim pvc-rbttx found but phase is Pending instead of Bound. 
Mar 25 18:25:30.008: INFO: PersistentVolumeClaim pvc-rbttx found and phase=Bound (2.034035604s) Mar 25 18:25:30.008: INFO: Waiting up to 3m0s for PersistentVolume local-pvwfnpk to have phase Bound Mar 25 18:25:30.010: INFO: PersistentVolume local-pvwfnpk found and phase=Bound (2.381445ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Mar 25 18:25:34.037: INFO: pod "pod-30798937-8766-48c9-b910-504ce985a9c4" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 18:25:34.037: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4220 PodName:pod-30798937-8766-48c9-b910-504ce985a9c4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:25:34.037: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:25:34.150: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 18:25:34.150: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4220 PodName:pod-30798937-8766-48c9-b910-504ce985a9c4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:25:34.150: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:25:34.255: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Mar 25 18:25:38.296: INFO: pod "pod-d120fb65-0e5b-4216-93c6-c969da429cb5" created on Node "latest-worker2" Mar 25 18:25:38.296: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4220 PodName:pod-d120fb65-0e5b-4216-93c6-c969da429cb5 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:25:38.296: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:25:38.435: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Mar 25 18:25:38.435: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-430aff82-15e3-412a-a1de-98c798295c9f > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4220 PodName:pod-d120fb65-0e5b-4216-93c6-c969da429cb5 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:25:38.435: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:25:38.531: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-430aff82-15e3-412a-a1de-98c798295c9f > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Mar 25 18:25:38.531: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4220 PodName:pod-30798937-8766-48c9-b910-504ce985a9c4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:25:38.531: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:25:38.634: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-430aff82-15e3-412a-a1de-98c798295c9f", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod 
pod-30798937-8766-48c9-b910-504ce985a9c4 in namespace persistent-local-volumes-test-4220 STEP: Deleting pod2 STEP: Deleting pod pod-d120fb65-0e5b-4216-93c6-c969da429cb5 in namespace persistent-local-volumes-test-4220 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 18:25:38.673: INFO: Deleting PersistentVolumeClaim "pvc-rbttx" Mar 25 18:25:38.692: INFO: Deleting PersistentVolume "local-pvwfnpk" STEP: Removing the test directory Mar 25 18:25:38.708: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-430aff82-15e3-412a-a1de-98c798295c9f && rm -r /tmp/local-volume-test-430aff82-15e3-412a-a1de-98c798295c9f-backend] Namespace:persistent-local-volumes-test-4220 PodName:hostexec-latest-worker2-t9cf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:25:38.708: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:25:38.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4220" for this suite. • [SLOW TEST:15.549 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":115,"completed":78,"skipped":4860,"failed":3,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSS ------------------------------ [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126 [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:25:38.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Mar 25 18:25:39.211: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: 
Creating a PVC Mar 25 18:25:39.217: INFO: Default storage class: "standard" Mar 25 18:25:39.217: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Creating a Pod that becomes Running and therefore is actively using the PVC STEP: Waiting for PVC to become Bound Mar 25 18:25:49.361: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-protectionvwqrq] to have phase Bound Mar 25 18:25:49.364: INFO: PersistentVolumeClaim pvc-protectionvwqrq found and phase=Bound (2.488823ms) STEP: Checking that PVC Protection finalizer is set [It] Verify that PVC in active use by a pod is not removed immediately /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126 STEP: Deleting the PVC, however, the PVC must not be removed from the system as it's in active use by a pod STEP: Checking that the PVC status is Terminating STEP: Deleting the pod that uses the PVC Mar 25 18:25:49.392: INFO: Deleting pod "pvc-tester-cbzhx" in namespace "pvc-protection-5696" Mar 25 18:25:49.397: INFO: Wait up to 5m0s for pod "pvc-tester-cbzhx" to be fully deleted STEP: Checking that the PVC is automatically removed from the system because it's no longer in active use by a pod Mar 25 18:26:07.422: INFO: Waiting up to 3m0s for PersistentVolumeClaim pvc-protectionvwqrq to be removed Mar 25 18:26:07.425: INFO: Claim "pvc-protectionvwqrq" in namespace "pvc-protection-5696" doesn't exist in the system [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:26:07.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-5696" for this suite. 
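The PVC Protection spec above depends on the kubernetes.io/pvc-protection finalizer: while a running pod still uses the claim, deleting the PVC only sets its deletionTimestamp, so the claim lingers in a Terminating state and is removed automatically once the pod is gone. A small client-go sketch of how that state can be observed, assuming a reachable cluster; the kubeconfig path, namespace, and claim name are placeholders.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // placeholder kubeconfig
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Placeholder namespace and claim name.
	pvc, err := cs.CoreV1().PersistentVolumeClaims("pvc-protection-test").
		Get(context.TODO(), "my-claim", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// The protection finalizer blocks actual removal while a pod uses the claim.
	protected := false
	for _, f := range pvc.Finalizers {
		if f == "kubernetes.io/pvc-protection" {
			protected = true
		}
	}
	terminating := pvc.DeletionTimestamp != nil

	// After a delete request, a claim still in use shows protected=true and
	// terminating=true instead of disappearing from the API.
	fmt.Printf("phase=%s protected=%v terminating=%v\n", pvc.Status.Phase, protected, terminating)
}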
[AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 • [SLOW TEST:28.567 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that PVC in active use by a pod is not removed immediately /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126 ------------------------------ {"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":115,"completed":79,"skipped":4872,"failed":3,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145 [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:26:07.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Mar 25 18:26:08.102: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Mar 25 18:26:08.369: INFO: Default storage class: "standard" Mar 25 18:26:08.369: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Creating a Pod that becomes Running and therefore is actively using the PVC STEP: Waiting for PVC to become Bound Mar 25 18:26:19.438: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-protectionz6g98] to have phase Bound Mar 25 18:26:19.441: INFO: PersistentVolumeClaim pvc-protectionz6g98 found and phase=Bound (2.940404ms) STEP: Checking that PVC Protection finalizer is set [It] Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145 STEP: Deleting the PVC, however, the PVC must not be removed from the system as it's in active use by a pod STEP: Checking that the PVC status is Terminating STEP: Creating second Pod whose scheduling fails because it uses a PVC that is being deleted Mar 25 18:26:19.474: INFO: Waiting up to 5m0s for pod "pvc-tester-qqmc7" in namespace "pvc-protection-5595" to be "Unschedulable" Mar 25 18:26:19.480: INFO: Pod "pvc-tester-qqmc7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.003425ms Mar 25 18:26:21.486: INFO: Pod "pvc-tester-qqmc7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012629146s Mar 25 18:26:21.486: INFO: Pod "pvc-tester-qqmc7" satisfied condition "Unschedulable" STEP: Deleting the second pod that uses the PVC that is being deleted Mar 25 18:26:21.512: INFO: Deleting pod "pvc-tester-qqmc7" in namespace "pvc-protection-5595" Mar 25 18:26:21.550: INFO: Wait up to 5m0s for pod "pvc-tester-qqmc7" to be fully deleted STEP: Checking again that the PVC status is Terminating STEP: Deleting the first pod that uses the PVC Mar 25 18:26:21.555: INFO: Deleting pod "pvc-tester-s6fsh" in namespace "pvc-protection-5595" Mar 25 18:26:21.559: INFO: Wait up to 5m0s for pod "pvc-tester-s6fsh" to be fully deleted STEP: Checking that the PVC is automatically removed from the system because it's no longer in active use by a pod Mar 25 18:26:35.604: INFO: Waiting up to 3m0s for PersistentVolumeClaim pvc-protectionz6g98 to be removed Mar 25 18:26:35.606: INFO: Claim "pvc-protectionz6g98" in namespace "pvc-protection-5595" doesn't exist in the system [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:26:35.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-5595" for this suite. [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 • [SLOW TEST:28.179 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145 ------------------------------ {"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":115,"completed":80,"skipped":4912,"failed":3,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSS ------------------------------ [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:26:35.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity unused /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-7224 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 18:26:35.774: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7224-1956/csi-attacher Mar 25 18:26:35.784: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7224 Mar 25 18:26:35.784: INFO: Define 
cluster role external-attacher-runner-csi-mock-volumes-7224 Mar 25 18:26:35.816: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7224 Mar 25 18:26:35.855: INFO: creating *v1.Role: csi-mock-volumes-7224-1956/external-attacher-cfg-csi-mock-volumes-7224 Mar 25 18:26:35.870: INFO: creating *v1.RoleBinding: csi-mock-volumes-7224-1956/csi-attacher-role-cfg Mar 25 18:26:35.898: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7224-1956/csi-provisioner Mar 25 18:26:35.914: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7224 Mar 25 18:26:35.914: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7224 Mar 25 18:26:35.925: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7224 Mar 25 18:26:35.937: INFO: creating *v1.Role: csi-mock-volumes-7224-1956/external-provisioner-cfg-csi-mock-volumes-7224 Mar 25 18:26:35.986: INFO: creating *v1.RoleBinding: csi-mock-volumes-7224-1956/csi-provisioner-role-cfg Mar 25 18:26:36.002: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7224-1956/csi-resizer Mar 25 18:26:36.015: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7224 Mar 25 18:26:36.015: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7224 Mar 25 18:26:36.021: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7224 Mar 25 18:26:36.027: INFO: creating *v1.Role: csi-mock-volumes-7224-1956/external-resizer-cfg-csi-mock-volumes-7224 Mar 25 18:26:36.033: INFO: creating *v1.RoleBinding: csi-mock-volumes-7224-1956/csi-resizer-role-cfg Mar 25 18:26:36.039: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7224-1956/csi-snapshotter Mar 25 18:26:36.056: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7224 Mar 25 18:26:36.056: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7224 Mar 25 18:26:36.081: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7224 Mar 25 18:26:36.111: INFO: creating *v1.Role: csi-mock-volumes-7224-1956/external-snapshotter-leaderelection-csi-mock-volumes-7224 Mar 25 18:26:36.115: INFO: creating *v1.RoleBinding: csi-mock-volumes-7224-1956/external-snapshotter-leaderelection Mar 25 18:26:36.129: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7224-1956/csi-mock Mar 25 18:26:36.164: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7224 Mar 25 18:26:36.177: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7224 Mar 25 18:26:36.196: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7224 Mar 25 18:26:36.250: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7224 Mar 25 18:26:36.254: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7224 Mar 25 18:26:36.261: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7224 Mar 25 18:26:36.266: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7224 Mar 25 18:26:36.284: INFO: creating *v1.StatefulSet: csi-mock-volumes-7224-1956/csi-mockplugin Mar 25 18:26:36.339: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7224 Mar 25 18:26:36.398: INFO: creating *v1.StatefulSet: csi-mock-volumes-7224-1956/csi-mockplugin-attacher Mar 25 18:26:36.445: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7224" Mar 25 18:26:36.463: INFO: waiting for 
CSIDriver csi-mock-csi-mock-volumes-7224 to register on node latest-worker STEP: Creating pod Mar 25 18:26:51.227: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Mar 25 18:27:11.436: INFO: Deleting pod "pvc-volume-tester-7tjh6" in namespace "csi-mock-volumes-7224" Mar 25 18:27:11.441: INFO: Wait up to 5m0s for pod "pvc-volume-tester-7tjh6" to be fully deleted STEP: Deleting pod pvc-volume-tester-7tjh6 Mar 25 18:27:25.448: INFO: Deleting pod "pvc-volume-tester-7tjh6" in namespace "csi-mock-volumes-7224" STEP: Deleting claim pvc-t8vk8 Mar 25 18:27:25.458: INFO: Waiting up to 2m0s for PersistentVolume pvc-dee3f4d9-3809-4406-9ed8-2c14a8b61d3f to get deleted Mar 25 18:27:25.478: INFO: PersistentVolume pvc-dee3f4d9-3809-4406-9ed8-2c14a8b61d3f found and phase=Bound (19.923993ms) Mar 25 18:27:27.483: INFO: PersistentVolume pvc-dee3f4d9-3809-4406-9ed8-2c14a8b61d3f was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-7224 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7224 STEP: Waiting for namespaces [csi-mock-volumes-7224] to vanish STEP: uninstalling csi mock driver Mar 25 18:27:33.658: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7224-1956/csi-attacher Mar 25 18:27:33.663: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7224 Mar 25 18:27:33.681: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7224 Mar 25 18:27:33.695: INFO: deleting *v1.Role: csi-mock-volumes-7224-1956/external-attacher-cfg-csi-mock-volumes-7224 Mar 25 18:27:33.725: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7224-1956/csi-attacher-role-cfg Mar 25 18:27:33.786: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7224-1956/csi-provisioner Mar 25 18:27:33.797: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7224 Mar 25 18:27:33.802: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7224 Mar 25 18:27:33.825: INFO: deleting *v1.Role: csi-mock-volumes-7224-1956/external-provisioner-cfg-csi-mock-volumes-7224 Mar 25 18:27:33.832: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7224-1956/csi-provisioner-role-cfg Mar 25 18:27:33.844: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7224-1956/csi-resizer Mar 25 18:27:33.849: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7224 Mar 25 18:27:33.856: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7224 Mar 25 18:27:33.868: INFO: deleting *v1.Role: csi-mock-volumes-7224-1956/external-resizer-cfg-csi-mock-volumes-7224 Mar 25 18:27:33.935: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7224-1956/csi-resizer-role-cfg Mar 25 18:27:33.946: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7224-1956/csi-snapshotter Mar 25 18:27:33.958: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7224 Mar 25 18:27:33.969: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7224 Mar 25 18:27:33.980: INFO: deleting *v1.Role: csi-mock-volumes-7224-1956/external-snapshotter-leaderelection-csi-mock-volumes-7224 Mar 25 18:27:33.987: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7224-1956/external-snapshotter-leaderelection Mar 25 18:27:34.032: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7224-1956/csi-mock Mar 25 18:27:34.109: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7224 Mar 25 18:27:34.115: 
INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7224 Mar 25 18:27:34.198: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7224 Mar 25 18:27:34.328: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7224 Mar 25 18:27:34.360: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7224 Mar 25 18:27:34.594: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7224 Mar 25 18:27:34.620: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7224 Mar 25 18:27:34.954: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7224-1956/csi-mockplugin Mar 25 18:27:35.342: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-7224 Mar 25 18:27:35.600: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7224-1956/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-7224-1956 STEP: Waiting for namespaces [csi-mock-volumes-7224-1956] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:28:07.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:92.062 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity unused /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused","total":115,"completed":81,"skipped":4916,"failed":3,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:28:07.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] files with FSGroup ownership should support (root,0644,tmpfs) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 25 18:28:07.788: INFO: Waiting up to 5m0s for pod "pod-f2019a43-907e-4bb9-b5c5-ef6e6b1eb688" in namespace "emptydir-9891" to be "Succeeded or Failed" Mar 25 18:28:07.833: INFO: Pod 
"pod-f2019a43-907e-4bb9-b5c5-ef6e6b1eb688": Phase="Pending", Reason="", readiness=false. Elapsed: 44.317471ms Mar 25 18:28:09.838: INFO: Pod "pod-f2019a43-907e-4bb9-b5c5-ef6e6b1eb688": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049695407s Mar 25 18:28:11.842: INFO: Pod "pod-f2019a43-907e-4bb9-b5c5-ef6e6b1eb688": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053955563s STEP: Saw pod success Mar 25 18:28:11.842: INFO: Pod "pod-f2019a43-907e-4bb9-b5c5-ef6e6b1eb688" satisfied condition "Succeeded or Failed" Mar 25 18:28:11.848: INFO: Trying to get logs from node latest-worker2 pod pod-f2019a43-907e-4bb9-b5c5-ef6e6b1eb688 container test-container: STEP: delete the pod Mar 25 18:28:12.068: INFO: Waiting for pod pod-f2019a43-907e-4bb9-b5c5-ef6e6b1eb688 to disappear Mar 25 18:28:12.137: INFO: Pod pod-f2019a43-907e-4bb9-b5c5-ef6e6b1eb688 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:28:12.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9891" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":115,"completed":82,"skipped":4934,"failed":3,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:826 [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:28:12.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should report an error and create no PV /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:826 Mar 25 18:28:12.311: INFO: Only supported for providers [aws] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:28:12.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-5211" for this suite. 
S [SKIPPING] [0.175 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Invalid AWS KMS key /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825 should report an error and create no PV [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:826 Only supported for providers [aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:827 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:28:12.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-389afc74-b0a0-4825-801c-e69395659e3c" Mar 25 18:28:16.460: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-389afc74-b0a0-4825-801c-e69395659e3c && dd if=/dev/zero of=/tmp/local-volume-test-389afc74-b0a0-4825-801c-e69395659e3c/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-389afc74-b0a0-4825-801c-e69395659e3c/file] Namespace:persistent-local-volumes-test-8711 PodName:hostexec-latest-worker2-vtdm5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:28:16.460: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:28:16.672: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-389afc74-b0a0-4825-801c-e69395659e3c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8711 PodName:hostexec-latest-worker2-vtdm5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:28:16.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:28:16.797: INFO: Creating a PV followed by a PVC Mar 25 18:28:16.809: INFO: Waiting for PV local-pv58zbp to bind to PVC pvc-tsqrz Mar 25 18:28:16.809: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-tsqrz] to have phase Bound Mar 25 18:28:16.844: INFO: PersistentVolumeClaim pvc-tsqrz found but phase is Pending instead of Bound. 
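The two ExecWithOptions entries above are how this test stages a "blockfswithoutformat" local volume: it zero-fills a backing file on the node, attaches it to a free loop device, and then reads back which /dev/loopN was used. Below is a minimal Go sketch of the same shell sequence run locally via os/exec; the directory path and the run helper are hypothetical, it needs root, and it is an illustration rather than the e2e framework's own code (losetup -j stands in for the grep/awk pipeline shown in the log).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes one shell command and returns its trimmed combined output.
// Hypothetical helper; the e2e test runs the equivalent commands inside a
// hostexec pod via nsenter instead of locally.
func run(cmd string) string {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%q failed: %v\n%s", cmd, err, out))
	}
	return strings.TrimSpace(string(out))
}

func main() {
	dir := "/tmp/local-volume-test-example" // hypothetical path

	// Setup: create a 20 MiB zero-filled backing file and attach it to a
	// free loop device, yielding a block device with no filesystem on it.
	run(fmt.Sprintf("mkdir -p %s && dd if=/dev/zero of=%s/file bs=4096 count=5120", dir, dir))
	run(fmt.Sprintf("losetup -f %s/file", dir))

	// Discover which loop device now backs the file (the log does this with
	// `losetup | grep .../file | awk '{ print $1 }'`).
	loopDev := run(fmt.Sprintf("losetup -j %s/file | cut -d: -f1", dir))
	fmt.Println("loop device:", loopDev)

	// Teardown, mirroring the AfterEach: detach the loop device and remove
	// the backing directory.
	run("losetup -d " + loopDev)
	run("rm -r " + dir)
}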
Mar 25 18:28:18.849: INFO: PersistentVolumeClaim pvc-tsqrz found and phase=Bound (2.039754908s) Mar 25 18:28:18.849: INFO: Waiting up to 3m0s for PersistentVolume local-pv58zbp to have phase Bound Mar 25 18:28:18.852: INFO: PersistentVolume local-pv58zbp found and phase=Bound (3.251455ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Mar 25 18:28:18.858: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 18:28:18.860: INFO: Deleting PersistentVolumeClaim "pvc-tsqrz" Mar 25 18:28:18.865: INFO: Deleting PersistentVolume "local-pv58zbp" Mar 25 18:28:18.887: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-389afc74-b0a0-4825-801c-e69395659e3c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8711 PodName:hostexec-latest-worker2-vtdm5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:28:18.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-389afc74-b0a0-4825-801c-e69395659e3c/file Mar 25 18:28:19.040: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-8711 PodName:hostexec-latest-worker2-vtdm5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:28:19.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-389afc74-b0a0-4825-801c-e69395659e3c Mar 25 18:28:19.187: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-389afc74-b0a0-4825-801c-e69395659e3c] Namespace:persistent-local-volumes-test-8711 PodName:hostexec-latest-worker2-vtdm5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:28:19.188: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:28:19.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8711" for this suite. 
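"Creating a PV followed by a PVC" in the block above amounts to building a PersistentVolume whose source is a node-local path, pinned to a single node through volume node affinity, plus a claim the controller can bind to it. A minimal sketch of those two objects with the core/v1 Go types follows; the names, path, and 2Gi size are hypothetical (the node name reuses the worker from the log), and the sketch only constructs and prints the objects rather than creating them through client-go.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fsMode := v1.PersistentVolumeFilesystem

	// A PV backed by a path on one node, pinned there via node affinity.
	pv := &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pv-example"},
		Spec: v1.PersistentVolumeSpec{
			Capacity:    v1.ResourceList{v1.ResourceStorage: resource.MustParse("2Gi")},
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			VolumeMode:  &fsMode,
			PersistentVolumeSource: v1.PersistentVolumeSource{
				Local: &v1.LocalVolumeSource{Path: "/tmp/local-volume-test-example"},
			},
			NodeAffinity: &v1.VolumeNodeAffinity{
				Required: &v1.NodeSelector{
					NodeSelectorTerms: []v1.NodeSelectorTerm{{
						MatchExpressions: []v1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: v1.NodeSelectorOpIn,
							Values:   []string{"latest-worker2"},
						}},
					}},
				},
			},
		},
	}

	// A claim the PV above can satisfy; the control plane binds the pair,
	// which is what the "Waiting for PV ... to bind to PVC ..." lines track.
	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-example"},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("2Gi")},
			},
		},
	}

	fmt.Println("built objects:", pv.Name, pvc.Name)
}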
S [SKIPPING] [7.012 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:28:19.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-17d19a16-ca5d-49fe-8dc9-afd1c09ba333" Mar 25 18:28:23.481: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-17d19a16-ca5d-49fe-8dc9-afd1c09ba333" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-17d19a16-ca5d-49fe-8dc9-afd1c09ba333" "/tmp/local-volume-test-17d19a16-ca5d-49fe-8dc9-afd1c09ba333"] Namespace:persistent-local-volumes-test-630 PodName:hostexec-latest-worker2-d6pzr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:28:23.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:28:23.591: INFO: Creating a PV followed by a PVC Mar 25 18:28:23.605: INFO: Waiting for PV local-pvsxvxs to bind to PVC pvc-jmm2f Mar 25 18:28:23.605: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-jmm2f] to have phase Bound Mar 25 18:28:23.611: INFO: PersistentVolumeClaim pvc-jmm2f found but phase is Pending instead of Bound. 
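The repeated "found but phase is Pending instead of Bound" lines are the framework polling the claim until it reaches phase Bound. A rough client-go equivalent of that wait loop, with a hypothetical kubeconfig path, namespace, and claim name, and the same 3m timeout, could look like this:

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path, namespace, and claim name.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, claim := "persistent-local-volumes-test-example", "pvc-example"

	// Poll every 2s for up to 3m until the claim reports phase Bound,
	// mirroring "Waiting up to 3m0s for PersistentVolumeClaims ... to have phase Bound".
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), claim, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Println("claim phase:", pvc.Status.Phase)
		return pvc.Status.Phase == v1.ClaimBound, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("claim is bound")
}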
Mar 25 18:28:25.615: INFO: PersistentVolumeClaim pvc-jmm2f found and phase=Bound (2.009927197s) Mar 25 18:28:25.615: INFO: Waiting up to 3m0s for PersistentVolume local-pvsxvxs to have phase Bound Mar 25 18:28:25.618: INFO: PersistentVolume local-pvsxvxs found and phase=Bound (2.549799ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Mar 25 18:28:29.713: INFO: pod "pod-36566d61-e310-44e7-a879-7e8432fbf9e2" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 18:28:29.714: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-630 PodName:pod-36566d61-e310-44e7-a879-7e8432fbf9e2 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:28:29.714: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:28:29.840: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 18:28:29.841: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-630 PodName:pod-36566d61-e310-44e7-a879-7e8432fbf9e2 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:28:29.841: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:28:29.952: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-36566d61-e310-44e7-a879-7e8432fbf9e2 in namespace persistent-local-volumes-test-630 STEP: Creating pod2 STEP: Creating a pod Mar 25 18:28:34.078: INFO: pod "pod-d3621768-b84a-41e3-9822-24a15a197e8e" created on Node "latest-worker2" STEP: Reading in pod2 Mar 25 18:28:34.079: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-630 PodName:pod-d3621768-b84a-41e3-9822-24a15a197e8e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:28:34.079: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:28:34.188: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-d3621768-b84a-41e3-9822-24a15a197e8e in namespace persistent-local-volumes-test-630 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 18:28:34.195: INFO: Deleting PersistentVolumeClaim "pvc-jmm2f" Mar 25 18:28:34.247: INFO: Deleting PersistentVolume "local-pvsxvxs" STEP: Unmount tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-17d19a16-ca5d-49fe-8dc9-afd1c09ba333" Mar 25 18:28:34.264: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-17d19a16-ca5d-49fe-8dc9-afd1c09ba333"] Namespace:persistent-local-volumes-test-630 PodName:hostexec-latest-worker2-d6pzr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:28:34.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Mar 25 18:28:34.392: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-17d19a16-ca5d-49fe-8dc9-afd1c09ba333] Namespace:persistent-local-volumes-test-630 PodName:hostexec-latest-worker2-d6pzr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:28:34.392: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:28:34.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-630" for this suite. • [SLOW TEST:15.180 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":115,"completed":83,"skipped":5131,"failed":3,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume storage capacity exhausted, immediate binding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:28:34.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, immediate binding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-4684 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Mar 25 18:28:34.797: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4684-7595/csi-attacher Mar 25 18:28:34.800: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4684 Mar 25 18:28:34.800: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4684 Mar 25 18:28:34.805: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4684 Mar 25 18:28:34.809: INFO: creating *v1.Role: csi-mock-volumes-4684-7595/external-attacher-cfg-csi-mock-volumes-4684 Mar 25 18:28:34.825: INFO: creating *v1.RoleBinding: csi-mock-volumes-4684-7595/csi-attacher-role-cfg Mar 25 18:28:34.850: INFO: creating 
*v1.ServiceAccount: csi-mock-volumes-4684-7595/csi-provisioner Mar 25 18:28:34.861: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4684 Mar 25 18:28:34.861: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4684 Mar 25 18:28:34.875: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4684 Mar 25 18:28:34.881: INFO: creating *v1.Role: csi-mock-volumes-4684-7595/external-provisioner-cfg-csi-mock-volumes-4684 Mar 25 18:28:34.888: INFO: creating *v1.RoleBinding: csi-mock-volumes-4684-7595/csi-provisioner-role-cfg Mar 25 18:28:34.910: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4684-7595/csi-resizer Mar 25 18:28:34.923: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4684 Mar 25 18:28:34.923: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4684 Mar 25 18:28:34.940: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4684 Mar 25 18:28:34.964: INFO: creating *v1.Role: csi-mock-volumes-4684-7595/external-resizer-cfg-csi-mock-volumes-4684 Mar 25 18:28:34.977: INFO: creating *v1.RoleBinding: csi-mock-volumes-4684-7595/csi-resizer-role-cfg Mar 25 18:28:34.989: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4684-7595/csi-snapshotter Mar 25 18:28:35.001: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4684 Mar 25 18:28:35.001: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4684 Mar 25 18:28:35.006: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4684 Mar 25 18:28:35.035: INFO: creating *v1.Role: csi-mock-volumes-4684-7595/external-snapshotter-leaderelection-csi-mock-volumes-4684 Mar 25 18:28:35.061: INFO: creating *v1.RoleBinding: csi-mock-volumes-4684-7595/external-snapshotter-leaderelection Mar 25 18:28:35.113: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4684-7595/csi-mock Mar 25 18:28:35.117: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4684 Mar 25 18:28:35.121: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4684 Mar 25 18:28:35.127: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4684 Mar 25 18:28:35.162: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4684 Mar 25 18:28:35.199: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4684 Mar 25 18:28:35.210: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4684 Mar 25 18:28:35.251: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4684 Mar 25 18:28:35.254: INFO: creating *v1.StatefulSet: csi-mock-volumes-4684-7595/csi-mockplugin Mar 25 18:28:35.264: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4684 Mar 25 18:28:35.299: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4684" Mar 25 18:28:35.335: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4684 to register on node latest-worker2 I0325 18:28:44.162199 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0325 18:28:44.164264 7 csi.go:380] gRPCCall: 
{"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4684","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0325 18:28:44.210766 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0325 18:28:44.255148 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0325 18:28:44.280688 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4684","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0325 18:28:44.767012 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-4684"},"Error":"","FullError":null} STEP: Creating pod Mar 25 18:28:44.968: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 18:28:44.986: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-7vwtj] to have phase Bound Mar 25 18:28:45.018: INFO: PersistentVolumeClaim pvc-7vwtj found but phase is Pending instead of Bound. 
I0325 18:28:45.022331 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-5ecb7581-d2c2-4748-ae40-eac7b8a2a899","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I0325 18:28:45.024590 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-5ecb7581-d2c2-4748-ae40-eac7b8a2a899","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-5ecb7581-d2c2-4748-ae40-eac7b8a2a899"}}},"Error":"","FullError":null} Mar 25 18:28:47.023: INFO: PersistentVolumeClaim pvc-7vwtj found and phase=Bound (2.036546046s) I0325 18:28:47.245893 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Mar 25 18:28:47.248: INFO: >>> kubeConfig: /root/.kube/config I0325 18:28:47.356462 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-5ecb7581-d2c2-4748-ae40-eac7b8a2a899/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-5ecb7581-d2c2-4748-ae40-eac7b8a2a899","storage.kubernetes.io/csiProvisionerIdentity":"1616696924297-8081-csi-mock-csi-mock-volumes-4684"}},"Response":{},"Error":"","FullError":null} I0325 18:28:47.363932 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Mar 25 18:28:47.366: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:28:47.463: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:28:47.581: INFO: >>> kubeConfig: /root/.kube/config I0325 18:28:47.703051 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-5ecb7581-d2c2-4748-ae40-eac7b8a2a899/globalmount","target_path":"/var/lib/kubelet/pods/bcae0f33-b603-4c32-ad20-9419d4c1f86e/volumes/kubernetes.io~csi/pvc-5ecb7581-d2c2-4748-ae40-eac7b8a2a899/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-5ecb7581-d2c2-4748-ae40-eac7b8a2a899","storage.kubernetes.io/csiProvisionerIdentity":"1616696924297-8081-csi-mock-csi-mock-volumes-4684"}},"Response":{},"Error":"","FullError":null} Mar 25 18:28:51.060: INFO: Deleting pod "pvc-volume-tester-qg2jx" in namespace "csi-mock-volumes-4684" Mar 25 18:28:51.065: INFO: Wait up to 5m0s for pod "pvc-volume-tester-qg2jx" to be fully deleted Mar 25 18:28:54.879: INFO: >>> kubeConfig: /root/.kube/config I0325 18:28:55.018601 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/bcae0f33-b603-4c32-ad20-9419d4c1f86e/volumes/kubernetes.io~csi/pvc-5ecb7581-d2c2-4748-ae40-eac7b8a2a899/mount"},"Response":{},"Error":"","FullError":null} 
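The two CreateVolume entries at the top of this block are the point of the "storage capacity exhausted, immediate binding" test: the mock driver first answers with gRPC code 8 (ResourceExhausted, message "fake error"), the provisioning side retries, and the second call succeeds so the claim can bind. A small standalone sketch of producing and classifying that error with grpc-go is below; createVolume is an invented stand-in, not the mock driver's code, and a real provisioner would back off between attempts.

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// createVolume is a hypothetical stand-in: the first attempt fails the way the
// mock driver does (code 8 = ResourceExhausted, "fake error"), the second succeeds.
func createVolume(attempt int) error {
	if attempt == 0 {
		return status.Error(codes.ResourceExhausted, "fake error")
	}
	return nil
}

func main() {
	for attempt := 0; ; attempt++ {
		err := createVolume(attempt)
		if err == nil {
			fmt.Println("provisioned on attempt", attempt+1)
			return
		}
		if status.Code(err) == codes.ResourceExhausted {
			// Treated as retriable, as seen in the log where the second
			// CreateVolume call goes through.
			fmt.Println("retrying after:", err)
			continue
		}
		panic(err)
	}
}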
I0325 18:28:55.083819 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0325 18:28:55.087659 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-5ecb7581-d2c2-4748-ae40-eac7b8a2a899/globalmount"},"Response":{},"Error":"","FullError":null} I0325 18:29:37.332610 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Mar 25 18:29:38.089: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-7vwtj", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4684", SelfLink:"", UID:"5ecb7581-d2c2-4748-ae40-eac7b8a2a899", ResourceVersion:"1295187", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752293724, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00553a450), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00553a468)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0010f6450), VolumeMode:(*v1.PersistentVolumeMode)(0xc0010f6460), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 18:29:38.089: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-7vwtj", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4684", SelfLink:"", UID:"5ecb7581-d2c2-4748-ae40-eac7b8a2a899", ResourceVersion:"1295188", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752293724, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4684"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00298e270), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00298e288)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00298e2a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00298e2b8)}}}, 
Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc001a1b9b0), VolumeMode:(*v1.PersistentVolumeMode)(0xc001a1b9c0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 18:29:38.089: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-7vwtj", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4684", SelfLink:"", UID:"5ecb7581-d2c2-4748-ae40-eac7b8a2a899", ResourceVersion:"1295194", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752293724, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4684"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001dd48b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001dd48d0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001dd48e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001dd4900)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-5ecb7581-d2c2-4748-ae40-eac7b8a2a899", StorageClassName:(*string)(0xc004ee4760), VolumeMode:(*v1.PersistentVolumeMode)(0xc004ee4770), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 18:29:38.090: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-7vwtj", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4684", SelfLink:"", UID:"5ecb7581-d2c2-4748-ae40-eac7b8a2a899", ResourceVersion:"1295195", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752293724, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4684"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, 
ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001dd4930), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001dd4948)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001dd4960), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001dd4978)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-5ecb7581-d2c2-4748-ae40-eac7b8a2a899", StorageClassName:(*string)(0xc004ee47a0), VolumeMode:(*v1.PersistentVolumeMode)(0xc004ee47b0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 18:29:38.090: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-7vwtj", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4684", SelfLink:"", UID:"5ecb7581-d2c2-4748-ae40-eac7b8a2a899", ResourceVersion:"1295316", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752293724, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc000136030), DeletionGracePeriodSeconds:(*int64)(0xc001f6bc78), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4684"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000136060), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0001361b0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0001361f8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000136210)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-5ecb7581-d2c2-4748-ae40-eac7b8a2a899", StorageClassName:(*string)(0xc001d96070), VolumeMode:(*v1.PersistentVolumeMode)(0xc001d96080), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 18:29:38.090: INFO: PVC 
event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-7vwtj", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4684", SelfLink:"", UID:"5ecb7581-d2c2-4748-ae40-eac7b8a2a899", ResourceVersion:"1295327", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752293724, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc000136270), DeletionGracePeriodSeconds:(*int64)(0xc001f6bd28), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4684"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000136288), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0001362a0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0001362b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0001362d0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-5ecb7581-d2c2-4748-ae40-eac7b8a2a899", StorageClassName:(*string)(0xc001d960c0), VolumeMode:(*v1.PersistentVolumeMode)(0xc001d960d0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-qg2jx Mar 25 18:29:38.090: INFO: Deleting pod "pvc-volume-tester-qg2jx" in namespace "csi-mock-volumes-4684" STEP: Deleting claim pvc-7vwtj STEP: Deleting storageclass csi-mock-volumes-4684-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4684 STEP: Waiting for namespaces [csi-mock-volumes-4684] to vanish STEP: uninstalling csi mock driver Mar 25 18:29:44.130: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4684-7595/csi-attacher Mar 25 18:29:44.136: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4684 Mar 25 18:29:44.142: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4684 Mar 25 18:29:44.169: INFO: deleting *v1.Role: csi-mock-volumes-4684-7595/external-attacher-cfg-csi-mock-volumes-4684 Mar 25 18:29:44.179: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4684-7595/csi-attacher-role-cfg Mar 25 18:29:44.189: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4684-7595/csi-provisioner Mar 25 18:29:44.196: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4684 Mar 25 18:29:44.215: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4684 Mar 25 18:29:44.227: INFO: deleting *v1.Role: csi-mock-volumes-4684-7595/external-provisioner-cfg-csi-mock-volumes-4684 Mar 25 18:29:44.233: INFO: deleting *v1.RoleBinding: 
csi-mock-volumes-4684-7595/csi-provisioner-role-cfg Mar 25 18:29:44.238: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4684-7595/csi-resizer Mar 25 18:29:44.244: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4684 Mar 25 18:29:44.251: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4684 Mar 25 18:29:44.294: INFO: deleting *v1.Role: csi-mock-volumes-4684-7595/external-resizer-cfg-csi-mock-volumes-4684 Mar 25 18:29:44.299: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4684-7595/csi-resizer-role-cfg Mar 25 18:29:44.342: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4684-7595/csi-snapshotter Mar 25 18:29:44.433: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4684 Mar 25 18:29:44.459: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4684 Mar 25 18:29:44.478: INFO: deleting *v1.Role: csi-mock-volumes-4684-7595/external-snapshotter-leaderelection-csi-mock-volumes-4684 Mar 25 18:29:44.484: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4684-7595/external-snapshotter-leaderelection Mar 25 18:29:44.508: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4684-7595/csi-mock Mar 25 18:29:44.566: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4684 Mar 25 18:29:44.612: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4684 Mar 25 18:29:44.627: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4684 Mar 25 18:29:44.634: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4684 Mar 25 18:29:44.646: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4684 Mar 25 18:29:44.703: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4684 Mar 25 18:29:44.712: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4684 Mar 25 18:29:44.731: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4684-7595/csi-mockplugin Mar 25 18:29:44.737: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4684 STEP: deleting the driver namespace: csi-mock-volumes-4684-7595 STEP: Waiting for namespaces [csi-mock-volumes-4684-7595] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:30:40.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:126.256 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 exhausted, immediate binding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":115,"completed":84,"skipped":5156,"failed":3,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume 
CSI workload information using mock driver should be passed when podInfoOnMount=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:30:40.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should be passed when podInfoOnMount=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-7584 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 18:30:40.945: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7584-2917/csi-attacher Mar 25 18:30:40.959: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7584 Mar 25 18:30:40.959: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7584 Mar 25 18:30:40.963: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7584 Mar 25 18:30:40.969: INFO: creating *v1.Role: csi-mock-volumes-7584-2917/external-attacher-cfg-csi-mock-volumes-7584 Mar 25 18:30:40.991: INFO: creating *v1.RoleBinding: csi-mock-volumes-7584-2917/csi-attacher-role-cfg Mar 25 18:30:41.005: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7584-2917/csi-provisioner Mar 25 18:30:41.020: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7584 Mar 25 18:30:41.020: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7584 Mar 25 18:30:41.034: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7584 Mar 25 18:30:41.042: INFO: creating *v1.Role: csi-mock-volumes-7584-2917/external-provisioner-cfg-csi-mock-volumes-7584 Mar 25 18:30:41.047: INFO: creating *v1.RoleBinding: csi-mock-volumes-7584-2917/csi-provisioner-role-cfg Mar 25 18:30:41.099: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7584-2917/csi-resizer Mar 25 18:30:41.112: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7584 Mar 25 18:30:41.112: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7584 Mar 25 18:30:41.118: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7584 Mar 25 18:30:41.134: INFO: creating *v1.Role: csi-mock-volumes-7584-2917/external-resizer-cfg-csi-mock-volumes-7584 Mar 25 18:30:41.149: INFO: creating *v1.RoleBinding: csi-mock-volumes-7584-2917/csi-resizer-role-cfg Mar 25 18:30:41.183: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7584-2917/csi-snapshotter Mar 25 18:30:41.222: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7584 Mar 25 18:30:41.222: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7584 Mar 25 18:30:41.232: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7584 Mar 25 18:30:41.266: INFO: creating *v1.Role: csi-mock-volumes-7584-2917/external-snapshotter-leaderelection-csi-mock-volumes-7584 Mar 25 18:30:41.296: INFO: creating *v1.RoleBinding: csi-mock-volumes-7584-2917/external-snapshotter-leaderelection Mar 25 18:30:41.311: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7584-2917/csi-mock Mar 25 18:30:41.316: INFO: creating 
*v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7584 Mar 25 18:30:41.322: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7584 Mar 25 18:30:41.349: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7584 Mar 25 18:30:41.368: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7584 Mar 25 18:30:41.422: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7584 Mar 25 18:30:41.436: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7584 Mar 25 18:30:41.442: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7584 Mar 25 18:30:41.448: INFO: creating *v1.StatefulSet: csi-mock-volumes-7584-2917/csi-mockplugin Mar 25 18:30:41.493: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7584 Mar 25 18:30:41.512: INFO: creating *v1.StatefulSet: csi-mock-volumes-7584-2917/csi-mockplugin-attacher Mar 25 18:30:41.542: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7584" Mar 25 18:30:41.679: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7584 to register on node latest-worker STEP: Creating pod Mar 25 18:30:51.554: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 18:30:51.564: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-dth8b] to have phase Bound Mar 25 18:30:51.581: INFO: PersistentVolumeClaim pvc-dth8b found but phase is Pending instead of Bound. Mar 25 18:30:53.586: INFO: PersistentVolumeClaim pvc-dth8b found and phase=Bound (2.021935428s) STEP: checking for CSIInlineVolumes feature Mar 25 18:31:15.668: INFO: Pod inline-volume-5mmcr has the following logs: Mar 25 18:31:15.711: INFO: Deleting pod "inline-volume-5mmcr" in namespace "csi-mock-volumes-7584" Mar 25 18:31:15.742: INFO: Wait up to 5m0s for pod "inline-volume-5mmcr" to be fully deleted STEP: Deleting the previously created pod Mar 25 18:31:25.787: INFO: Deleting pod "pvc-volume-tester-5xjf4" in namespace "csi-mock-volumes-7584" Mar 25 18:31:25.792: INFO: Wait up to 5m0s for pod "pvc-volume-tester-5xjf4" to be fully deleted STEP: Checking CSI driver logs Mar 25 18:32:05.867: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 99a67749-f644-4aaa-b677-f10839e07f8b Mar 25 18:32:05.867: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default Mar 25 18:32:05.867: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false Mar 25 18:32:05.867: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-5xjf4 Mar 25 18:32:05.867: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-7584 Mar 25 18:32:05.867: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/99a67749-f644-4aaa-b677-f10839e07f8b/volumes/kubernetes.io~csi/pvc-8467bc9d-1388-4491-b6fc-386835bb4b4a/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-5xjf4 Mar 25 18:32:05.867: INFO: Deleting pod "pvc-volume-tester-5xjf4" in namespace "csi-mock-volumes-7584" STEP: Deleting claim pvc-dth8b Mar 25 18:32:05.878: INFO: Waiting up to 2m0s for PersistentVolume pvc-8467bc9d-1388-4491-b6fc-386835bb4b4a to get deleted Mar 25 18:32:05.894: INFO: PersistentVolume 
pvc-8467bc9d-1388-4491-b6fc-386835bb4b4a found and phase=Bound (16.573162ms) Mar 25 18:32:07.898: INFO: PersistentVolume pvc-8467bc9d-1388-4491-b6fc-386835bb4b4a was removed STEP: Deleting storageclass csi-mock-volumes-7584-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7584 STEP: Waiting for namespaces [csi-mock-volumes-7584] to vanish STEP: uninstalling csi mock driver Mar 25 18:32:13.931: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7584-2917/csi-attacher Mar 25 18:32:13.937: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7584 Mar 25 18:32:14.056: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7584 Mar 25 18:32:14.061: INFO: deleting *v1.Role: csi-mock-volumes-7584-2917/external-attacher-cfg-csi-mock-volumes-7584 Mar 25 18:32:14.106: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7584-2917/csi-attacher-role-cfg Mar 25 18:32:14.239: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7584-2917/csi-provisioner Mar 25 18:32:14.280: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7584 Mar 25 18:32:14.457: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7584 Mar 25 18:32:14.476: INFO: deleting *v1.Role: csi-mock-volumes-7584-2917/external-provisioner-cfg-csi-mock-volumes-7584 Mar 25 18:32:14.506: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7584-2917/csi-provisioner-role-cfg Mar 25 18:32:14.525: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7584-2917/csi-resizer Mar 25 18:32:14.554: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7584 Mar 25 18:32:14.566: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7584 Mar 25 18:32:14.577: INFO: deleting *v1.Role: csi-mock-volumes-7584-2917/external-resizer-cfg-csi-mock-volumes-7584 Mar 25 18:32:14.601: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7584-2917/csi-resizer-role-cfg Mar 25 18:32:14.638: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7584-2917/csi-snapshotter Mar 25 18:32:14.750: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7584 Mar 25 18:32:14.761: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7584 Mar 25 18:32:14.769: INFO: deleting *v1.Role: csi-mock-volumes-7584-2917/external-snapshotter-leaderelection-csi-mock-volumes-7584 Mar 25 18:32:14.776: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7584-2917/external-snapshotter-leaderelection Mar 25 18:32:14.781: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7584-2917/csi-mock Mar 25 18:32:14.801: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7584 Mar 25 18:32:14.811: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7584 Mar 25 18:32:14.823: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7584 Mar 25 18:32:14.830: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7584 Mar 25 18:32:14.835: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7584 Mar 25 18:32:14.859: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7584 Mar 25 18:32:14.872: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7584 Mar 25 18:32:14.883: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7584-2917/csi-mockplugin Mar 25 18:32:14.896: INFO: deleting *v1.CSIDriver: 
csi-mock-csi-mock-volumes-7584 Mar 25 18:32:14.968: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7584-2917/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-7584-2917 STEP: Waiting for namespaces [csi-mock-volumes-7584-2917] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:33:11.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:150.254 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should be passed when podInfoOnMount=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":115,"completed":85,"skipped":5185,"failed":3,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Volumes NFSv3 should be mountable for NFSv3 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:103 [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:33:11.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Mar 25 18:33:11.133: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:33:11.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-8090" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.114 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 NFSv3 [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:102 should be mountable for NFSv3 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:103 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:33:11.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not modify fsGroup if fsGroupPolicy=None /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 STEP: Building a driver namespace object, basename csi-mock-volumes-4656 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 18:33:11.362: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4656-7722/csi-attacher Mar 25 18:33:11.365: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4656 Mar 25 18:33:11.365: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4656 Mar 25 18:33:11.392: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4656 Mar 25 18:33:11.395: INFO: creating *v1.Role: csi-mock-volumes-4656-7722/external-attacher-cfg-csi-mock-volumes-4656 Mar 25 18:33:11.399: INFO: creating *v1.RoleBinding: csi-mock-volumes-4656-7722/csi-attacher-role-cfg Mar 25 18:33:11.405: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4656-7722/csi-provisioner Mar 25 18:33:11.427: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4656 Mar 25 18:33:11.427: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4656 Mar 25 18:33:11.441: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4656 Mar 25 18:33:11.468: INFO: creating *v1.Role: csi-mock-volumes-4656-7722/external-provisioner-cfg-csi-mock-volumes-4656 Mar 25 18:33:11.489: INFO: creating *v1.RoleBinding: csi-mock-volumes-4656-7722/csi-provisioner-role-cfg Mar 25 18:33:11.525: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4656-7722/csi-resizer Mar 25 18:33:11.528: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4656 Mar 25 18:33:11.528: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4656 Mar 25 18:33:11.531: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4656 Mar 25 18:33:11.537: INFO: creating *v1.Role: csi-mock-volumes-4656-7722/external-resizer-cfg-csi-mock-volumes-4656 Mar 25 18:33:11.585: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-4656-7722/csi-resizer-role-cfg Mar 25 18:33:11.617: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4656-7722/csi-snapshotter Mar 25 18:33:11.667: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4656 Mar 25 18:33:11.667: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4656 Mar 25 18:33:11.672: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4656 Mar 25 18:33:11.676: INFO: creating *v1.Role: csi-mock-volumes-4656-7722/external-snapshotter-leaderelection-csi-mock-volumes-4656 Mar 25 18:33:11.681: INFO: creating *v1.RoleBinding: csi-mock-volumes-4656-7722/external-snapshotter-leaderelection Mar 25 18:33:11.696: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4656-7722/csi-mock Mar 25 18:33:11.723: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4656 Mar 25 18:33:11.729: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4656 Mar 25 18:33:11.750: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4656 Mar 25 18:33:11.800: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4656 Mar 25 18:33:11.806: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4656 Mar 25 18:33:11.812: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4656 Mar 25 18:33:11.818: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4656 Mar 25 18:33:11.824: INFO: creating *v1.StatefulSet: csi-mock-volumes-4656-7722/csi-mockplugin Mar 25 18:33:11.852: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4656 Mar 25 18:33:11.869: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4656" Mar 25 18:33:11.893: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4656 to register on node latest-worker2 STEP: Creating pod with fsGroup Mar 25 18:33:21.963: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 18:33:21.999: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-n4b7v] to have phase Bound Mar 25 18:33:22.011: INFO: PersistentVolumeClaim pvc-n4b7v found but phase is Pending instead of Bound. 
Mar 25 18:33:24.015: INFO: PersistentVolumeClaim pvc-n4b7v found and phase=Bound (2.016368823s) Mar 25 18:33:28.041: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-4656] Namespace:csi-mock-volumes-4656 PodName:pvc-volume-tester-5mz88 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:33:28.041: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:33:28.183: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-4656/csi-mock-volumes-4656'; sync] Namespace:csi-mock-volumes-4656 PodName:pvc-volume-tester-5mz88 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:33:28.183: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:33:57.610: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-4656/csi-mock-volumes-4656] Namespace:csi-mock-volumes-4656 PodName:pvc-volume-tester-5mz88 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:33:57.610: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:33:57.734: INFO: pod csi-mock-volumes-4656/pvc-volume-tester-5mz88 exec for cmd ls -l /mnt/test/csi-mock-volumes-4656/csi-mock-volumes-4656, stdout: -rw-r--r-- 1 root root 13 Mar 25 18:33 /mnt/test/csi-mock-volumes-4656/csi-mock-volumes-4656, stderr: Mar 25 18:33:57.734: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-4656] Namespace:csi-mock-volumes-4656 PodName:pvc-volume-tester-5mz88 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:33:57.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-5mz88 Mar 25 18:33:57.836: INFO: Deleting pod "pvc-volume-tester-5mz88" in namespace "csi-mock-volumes-4656" Mar 25 18:33:57.841: INFO: Wait up to 5m0s for pod "pvc-volume-tester-5mz88" to be fully deleted STEP: Deleting claim pvc-n4b7v Mar 25 18:34:35.861: INFO: Waiting up to 2m0s for PersistentVolume pvc-b6a3a605-8f96-4267-a50c-c96312439fa5 to get deleted Mar 25 18:34:35.880: INFO: PersistentVolume pvc-b6a3a605-8f96-4267-a50c-c96312439fa5 found and phase=Bound (19.235168ms) Mar 25 18:34:37.884: INFO: PersistentVolume pvc-b6a3a605-8f96-4267-a50c-c96312439fa5 was removed STEP: Deleting storageclass csi-mock-volumes-4656-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4656 STEP: Waiting for namespaces [csi-mock-volumes-4656] to vanish STEP: uninstalling csi mock driver Mar 25 18:34:43.905: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4656-7722/csi-attacher Mar 25 18:34:43.911: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4656 Mar 25 18:34:43.934: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4656 Mar 25 18:34:43.954: INFO: deleting *v1.Role: csi-mock-volumes-4656-7722/external-attacher-cfg-csi-mock-volumes-4656 Mar 25 18:34:43.960: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4656-7722/csi-attacher-role-cfg Mar 25 18:34:43.967: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4656-7722/csi-provisioner Mar 25 18:34:43.973: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4656 Mar 25 18:34:43.979: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4656 Mar 25 18:34:43.989: INFO: deleting *v1.Role: 
csi-mock-volumes-4656-7722/external-provisioner-cfg-csi-mock-volumes-4656 Mar 25 18:34:44.012: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4656-7722/csi-provisioner-role-cfg Mar 25 18:34:44.016: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4656-7722/csi-resizer Mar 25 18:34:44.021: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4656 Mar 25 18:34:44.036: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4656 Mar 25 18:34:44.056: INFO: deleting *v1.Role: csi-mock-volumes-4656-7722/external-resizer-cfg-csi-mock-volumes-4656 Mar 25 18:34:44.063: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4656-7722/csi-resizer-role-cfg Mar 25 18:34:44.068: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4656-7722/csi-snapshotter Mar 25 18:34:44.075: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4656 Mar 25 18:34:44.080: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4656 Mar 25 18:34:44.091: INFO: deleting *v1.Role: csi-mock-volumes-4656-7722/external-snapshotter-leaderelection-csi-mock-volumes-4656 Mar 25 18:34:44.098: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4656-7722/external-snapshotter-leaderelection Mar 25 18:34:44.103: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4656-7722/csi-mock Mar 25 18:34:44.144: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4656 Mar 25 18:34:44.152: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4656 Mar 25 18:34:44.163: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4656 Mar 25 18:34:44.188: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4656 Mar 25 18:34:44.194: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4656 Mar 25 18:34:44.200: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4656 Mar 25 18:34:44.205: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4656 Mar 25 18:34:44.214: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4656-7722/csi-mockplugin Mar 25 18:34:44.219: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4656 STEP: deleting the driver namespace: csi-mock-volumes-4656-7722 STEP: Waiting for namespaces [csi-mock-volumes-4656-7722] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:35:40.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:149.261 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1433 should not modify fsGroup if fsGroupPolicy=None /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None","total":115,"completed":86,"skipped":5305,"failed":3,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] 
[NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:35:40.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-f6cafeb5-888d-4684-b235-3080c8fe732d" Mar 25 18:35:44.551: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f6cafeb5-888d-4684-b235-3080c8fe732d && dd if=/dev/zero of=/tmp/local-volume-test-f6cafeb5-888d-4684-b235-3080c8fe732d/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-f6cafeb5-888d-4684-b235-3080c8fe732d/file] Namespace:persistent-local-volumes-test-3096 PodName:hostexec-latest-worker2-d969f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:35:44.551: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:35:44.760: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-f6cafeb5-888d-4684-b235-3080c8fe732d/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3096 PodName:hostexec-latest-worker2-d969f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:35:44.760: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:35:44.873: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-f6cafeb5-888d-4684-b235-3080c8fe732d && chmod o+rwx /tmp/local-volume-test-f6cafeb5-888d-4684-b235-3080c8fe732d] Namespace:persistent-local-volumes-test-3096 PodName:hostexec-latest-worker2-d969f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:35:44.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:35:45.282: INFO: Creating a PV followed by a PVC Mar 25 18:35:45.308: INFO: Waiting for PV local-pvqmf68 to bind to PVC pvc-nhgtj Mar 25 18:35:45.308: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-nhgtj] to have phase Bound Mar 25 18:35:45.320: INFO: PersistentVolumeClaim pvc-nhgtj found but phase is Pending instead of Bound. 
Mar 25 18:35:47.324: INFO: PersistentVolumeClaim pvc-nhgtj found and phase=Bound (2.016226148s) Mar 25 18:35:47.324: INFO: Waiting up to 3m0s for PersistentVolume local-pvqmf68 to have phase Bound Mar 25 18:35:47.328: INFO: PersistentVolume local-pvqmf68 found and phase=Bound (3.473195ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 18:35:51.373: INFO: pod "pod-42674296-bdca-4562-bdd6-8e7cdd712f52" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 18:35:51.373: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3096 PodName:pod-42674296-bdca-4562-bdd6-8e7cdd712f52 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:35:51.373: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:35:51.494: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Mar 25 18:35:51.494: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3096 PodName:pod-42674296-bdca-4562-bdd6-8e7cdd712f52 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:35:51.494: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:35:51.592: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Mar 25 18:35:51.592: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-f6cafeb5-888d-4684-b235-3080c8fe732d > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3096 PodName:pod-42674296-bdca-4562-bdd6-8e7cdd712f52 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:35:51.593: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:35:51.692: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-f6cafeb5-888d-4684-b235-3080c8fe732d > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-42674296-bdca-4562-bdd6-8e7cdd712f52 in namespace persistent-local-volumes-test-3096 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 18:35:51.698: INFO: Deleting PersistentVolumeClaim "pvc-nhgtj" Mar 25 18:35:51.719: INFO: Deleting PersistentVolume "local-pvqmf68" Mar 25 18:35:51.735: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-f6cafeb5-888d-4684-b235-3080c8fe732d] Namespace:persistent-local-volumes-test-3096 PodName:hostexec-latest-worker2-d969f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:35:51.735: INFO: >>> kubeConfig: /root/.kube/config 
Mar 25 18:35:51.920: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-f6cafeb5-888d-4684-b235-3080c8fe732d/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3096 PodName:hostexec-latest-worker2-d969f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:35:51.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-f6cafeb5-888d-4684-b235-3080c8fe732d/file Mar 25 18:35:52.040: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-3096 PodName:hostexec-latest-worker2-d969f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:35:52.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-f6cafeb5-888d-4684-b235-3080c8fe732d Mar 25 18:35:52.133: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f6cafeb5-888d-4684-b235-3080c8fe732d] Namespace:persistent-local-volumes-test-3096 PodName:hostexec-latest-worker2-d969f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:35:52.133: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:35:52.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3096" for this suite. 
• [SLOW TEST:11.848 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: blockfswithformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
should be able to mount volume and write from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":115,"completed":87,"skipped":5371,"failed":3,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SSSSSSSSSSS
------------------------------
[sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:107
[BeforeEach] [sig-storage] PV Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 18:35:52.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv-protection
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PV Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:51
Mar 25 18:35:52.381: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
STEP: Creating a PV
STEP: Waiting for PV to enter phase Available
Mar 25 18:35:52.392: INFO: Waiting up to 30s for PersistentVolume hostpath-4ktpg to have phase Available
Mar 25 18:35:52.398: INFO: PersistentVolume hostpath-4ktpg found but phase is Pending instead of Available.
Mar 25 18:35:53.407: INFO: PersistentVolume hostpath-4ktpg found and phase=Available (1.014645277s)
STEP: Checking that PV Protection finalizer is set
[It] Verify that PV bound to a PVC is not removed immediately
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:107
STEP: Creating a PVC
STEP: Waiting for PVC to become Bound
Mar 25 18:35:53.414: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-n6cf8] to have phase Bound
Mar 25 18:35:53.422: INFO: PersistentVolumeClaim pvc-n6cf8 found but phase is Pending instead of Bound.
Mar 25 18:35:55.425: INFO: PersistentVolumeClaim pvc-n6cf8 found and phase=Bound (2.010651688s) STEP: Deleting the PV, however, the PV must not be removed from the system as it's bound to a PVC STEP: Checking that the PV status is Terminating STEP: Deleting the PVC that is bound to the PV STEP: Checking that the PV is automatically removed from the system because it's no longer bound to a PVC Mar 25 18:35:55.450: INFO: Waiting up to 3m0s for PersistentVolume hostpath-4ktpg to get deleted Mar 25 18:35:55.453: INFO: PersistentVolume hostpath-4ktpg found and phase=Bound (2.822408ms) Mar 25 18:35:57.494: INFO: PersistentVolume hostpath-4ktpg was removed [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:35:57.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-protection-6941" for this suite. [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:92 Mar 25 18:35:57.575: INFO: AfterEach: Cleaning up test resources. Mar 25 18:35:57.575: INFO: Deleting PersistentVolumeClaim "pvc-n6cf8" Mar 25 18:35:57.581: INFO: Deleting PersistentVolume "hostpath-4ktpg" • [SLOW TEST:5.331 seconds] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that PV bound to a PVC is not removed immediately /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:107 ------------------------------ {"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":115,"completed":88,"skipped":5382,"failed":3,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:35:57.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, no capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-4793 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 18:35:58.539: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4793-516/csi-attacher Mar 25 18:35:58.542: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4793 Mar 25 18:35:58.542: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4793 Mar 25 18:35:58.551: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4793 Mar 25 18:35:58.557: INFO: creating 
*v1.Role: csi-mock-volumes-4793-516/external-attacher-cfg-csi-mock-volumes-4793 Mar 25 18:35:58.634: INFO: creating *v1.RoleBinding: csi-mock-volumes-4793-516/csi-attacher-role-cfg Mar 25 18:35:58.670: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4793-516/csi-provisioner Mar 25 18:35:58.701: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4793 Mar 25 18:35:58.701: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4793 Mar 25 18:35:59.043: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4793 Mar 25 18:35:59.047: INFO: creating *v1.Role: csi-mock-volumes-4793-516/external-provisioner-cfg-csi-mock-volumes-4793 Mar 25 18:35:59.073: INFO: creating *v1.RoleBinding: csi-mock-volumes-4793-516/csi-provisioner-role-cfg Mar 25 18:35:59.103: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4793-516/csi-resizer Mar 25 18:35:59.114: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4793 Mar 25 18:35:59.114: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4793 Mar 25 18:35:59.135: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4793 Mar 25 18:35:59.173: INFO: creating *v1.Role: csi-mock-volumes-4793-516/external-resizer-cfg-csi-mock-volumes-4793 Mar 25 18:35:59.187: INFO: creating *v1.RoleBinding: csi-mock-volumes-4793-516/csi-resizer-role-cfg Mar 25 18:35:59.192: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4793-516/csi-snapshotter Mar 25 18:35:59.198: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4793 Mar 25 18:35:59.198: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4793 Mar 25 18:35:59.204: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4793 Mar 25 18:35:59.210: INFO: creating *v1.Role: csi-mock-volumes-4793-516/external-snapshotter-leaderelection-csi-mock-volumes-4793 Mar 25 18:35:59.232: INFO: creating *v1.RoleBinding: csi-mock-volumes-4793-516/external-snapshotter-leaderelection Mar 25 18:35:59.246: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4793-516/csi-mock Mar 25 18:35:59.316: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4793 Mar 25 18:35:59.330: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4793 Mar 25 18:35:59.342: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4793 Mar 25 18:35:59.360: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4793 Mar 25 18:35:59.379: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4793 Mar 25 18:35:59.454: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4793 Mar 25 18:35:59.459: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4793 Mar 25 18:35:59.464: INFO: creating *v1.StatefulSet: csi-mock-volumes-4793-516/csi-mockplugin Mar 25 18:35:59.470: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4793 Mar 25 18:35:59.490: INFO: creating *v1.StatefulSet: csi-mock-volumes-4793-516/csi-mockplugin-attacher Mar 25 18:35:59.508: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4793" Mar 25 18:35:59.579: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4793 to register on node latest-worker2 STEP: Creating pod Mar 25 18:36:14.306: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 
18:36:34.387: FAIL: pod unexpectedly started to run

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.glob..func1.14.1()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1232 +0xad9
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00331cd80)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00331cd80)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00331cd80, 0x6d60740)
    /usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1239 +0x2b3
STEP: Deleting pod pvc-volume-tester-x7z2l
Mar 25 18:36:34.388: INFO: Deleting pod "pvc-volume-tester-x7z2l" in namespace "csi-mock-volumes-4793"
Mar 25 18:36:34.395: INFO: Wait up to 5m0s for pod "pvc-volume-tester-x7z2l" to be fully deleted
STEP: Deleting claim pvc-n8vpg
Mar 25 18:37:36.426: INFO: Waiting up to 2m0s for PersistentVolume pvc-ff3d3ece-0834-48ce-9ef0-6491c9d1cede to get deleted
Mar 25 18:37:36.434: INFO: PersistentVolume pvc-ff3d3ece-0834-48ce-9ef0-6491c9d1cede found and phase=Bound (7.600521ms)
Mar 25 18:37:38.439: INFO: PersistentVolume pvc-ff3d3ece-0834-48ce-9ef0-6491c9d1cede was removed
STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-4793
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-4793
STEP: Waiting for namespaces [csi-mock-volumes-4793] to vanish
STEP: uninstalling csi mock driver
Mar 25 18:37:44.458: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4793-516/csi-attacher
Mar 25 18:37:44.464: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4793
Mar 25 18:37:44.472: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4793
Mar 25 18:37:44.514: INFO: deleting *v1.Role: csi-mock-volumes-4793-516/external-attacher-cfg-csi-mock-volumes-4793
Mar 25 18:37:44.533: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4793-516/csi-attacher-role-cfg
Mar 25 18:37:44.538: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4793-516/csi-provisioner
Mar 25 18:37:44.543: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4793
Mar 25 18:37:44.560: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4793
Mar 25 18:37:44.567: INFO: deleting *v1.Role: csi-mock-volumes-4793-516/external-provisioner-cfg-csi-mock-volumes-4793
Mar 25 18:37:44.573: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4793-516/csi-provisioner-role-cfg
Mar 25 18:37:44.579: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4793-516/csi-resizer
Mar 25 18:37:44.585: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4793
Mar 25 18:37:44.635: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4793
Mar 25 18:37:44.646: INFO: deleting *v1.Role: csi-mock-volumes-4793-516/external-resizer-cfg-csi-mock-volumes-4793
Mar 25 18:37:44.651: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4793-516/csi-resizer-role-cfg
Mar 25 18:37:44.657: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4793-516/csi-snapshotter
Mar 25 18:37:44.663: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4793
Mar 25 18:37:44.675: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4793
Mar 25 18:37:44.686: INFO: deleting *v1.Role: csi-mock-volumes-4793-516/external-snapshotter-leaderelection-csi-mock-volumes-4793
Mar 25 18:37:44.693: INFO: deleting
*v1.RoleBinding: csi-mock-volumes-4793-516/external-snapshotter-leaderelection Mar 25 18:37:44.710: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4793-516/csi-mock Mar 25 18:37:44.723: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4793 Mar 25 18:37:44.733: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4793 Mar 25 18:37:44.756: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4793 Mar 25 18:37:44.766: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4793 Mar 25 18:37:44.771: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4793 Mar 25 18:37:44.777: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4793 Mar 25 18:37:44.796: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4793 Mar 25 18:37:44.801: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4793-516/csi-mockplugin Mar 25 18:37:44.807: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4793 Mar 25 18:37:44.843: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4793-516/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-4793-516 STEP: Waiting for namespaces [csi-mock-volumes-4793-516] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:38:40.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • Failure [163.310 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity used, no capacity [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 Mar 25 18:36:34.387: pod unexpectedly started to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1232 ------------------------------ {"msg":"FAILED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","total":115,"completed":88,"skipped":5404,"failed":4,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]} SSS ------------------------------ [sig-storage] Multi-AZ Cluster Volumes should schedule pods in the same zones as statically provisioned PVs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:57 [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:38:40.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename multi-az STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:46 Mar 25 18:38:40.969: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:38:40.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multi-az-5709" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.089 seconds] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should schedule pods in the same zones as statically provisioned PVs [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:57 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:47 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should support subPath [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93 [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:38:40.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should support subPath [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93 STEP: Creating a pod to test hostPath subPath Mar 25 18:38:41.064: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1547" to be "Succeeded or Failed" Mar 25 18:38:41.081: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 17.319085ms Mar 25 18:38:43.091: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026577616s Mar 25 18:38:45.114: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050557705s Mar 25 18:38:47.119: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055420767s STEP: Saw pod success Mar 25 18:38:47.119: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Mar 25 18:38:47.122: INFO: Trying to get logs from node latest-worker2 pod pod-host-path-test container test-container-2: STEP: delete the pod Mar 25 18:38:47.288: INFO: Waiting for pod pod-host-path-test to disappear Mar 25 18:38:47.345: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 18:38:47.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-1547" for this suite. 
• [SLOW TEST:6.368 seconds] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support subPath [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":115,"completed":89,"skipped":5467,"failed":4,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 18:38:47.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 18:38:49.528: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3dd09d64-cce1-4912-94f8-7f5b7cb71a47] Namespace:persistent-local-volumes-test-2079 PodName:hostexec-latest-worker-qghbl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 18:38:49.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 18:38:49.630: INFO: Creating a PV followed by a PVC Mar 25 18:38:49.639: INFO: Waiting for PV local-pvkt8g7 to bind to PVC pvc-48r9b Mar 25 18:38:49.639: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-48r9b] to have phase Bound Mar 25 18:38:49.651: INFO: PersistentVolumeClaim pvc-48r9b found but phase is Pending instead of Bound. Mar 25 18:38:51.659: INFO: PersistentVolumeClaim pvc-48r9b found but phase is Pending instead of Bound. Mar 25 18:38:53.664: INFO: PersistentVolumeClaim pvc-48r9b found but phase is Pending instead of Bound. Mar 25 18:38:55.668: INFO: PersistentVolumeClaim pvc-48r9b found but phase is Pending instead of Bound. Mar 25 18:38:57.673: INFO: PersistentVolumeClaim pvc-48r9b found but phase is Pending instead of Bound. Mar 25 18:38:59.678: INFO: PersistentVolumeClaim pvc-48r9b found but phase is Pending instead of Bound. 
Mar 25 18:39:01.682: INFO: PersistentVolumeClaim pvc-48r9b found but phase is Pending instead of Bound. Mar 25 18:39:03.687: INFO: PersistentVolumeClaim pvc-48r9b found and phase=Bound (14.047956557s) Mar 25 18:39:03.687: INFO: Waiting up to 3m0s for PersistentVolume local-pvkt8g7 to have phase Bound Mar 25 18:39:03.689: INFO: PersistentVolume local-pvkt8g7 found and phase=Bound (2.51487ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 18:39:07.789: INFO: pod "pod-07c2d087-fd95-4435-8c98-c74f6e3dc949" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 18:39:07.789: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2079 PodName:pod-07c2d087-fd95-4435-8c98-c74f6e3dc949 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:39:07.789: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:39:07.921: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Mar 25 18:39:07.921: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2079 PodName:pod-07c2d087-fd95-4435-8c98-c74f6e3dc949 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:39:07.921: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:39:08.022: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Mar 25 18:39:08.022: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-3dd09d64-cce1-4912-94f8-7f5b7cb71a47 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2079 PodName:pod-07c2d087-fd95-4435-8c98-c74f6e3dc949 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 18:39:08.022: INFO: >>> kubeConfig: /root/.kube/config Mar 25 18:39:08.125: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-3dd09d64-cce1-4912-94f8-7f5b7cb71a47 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-07c2d087-fd95-4435-8c98-c74f6e3dc949 in namespace persistent-local-volumes-test-2079 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 18:39:08.132: INFO: Deleting PersistentVolumeClaim "pvc-48r9b" Mar 25 18:39:08.191: INFO: Deleting PersistentVolume "local-pvkt8g7" STEP: Removing the test directory Mar 25 18:39:08.195: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3dd09d64-cce1-4912-94f8-7f5b7cb71a47] Namespace:persistent-local-volumes-test-2079 PodName:hostexec-latest-worker-qghbl ContainerName:agnhost-container Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 18:39:08.195: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 18:39:08.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-2079" for this suite.
• [SLOW TEST:20.958 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: dir]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
should be able to mount volume and write from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":115,"completed":90,"skipped":5634,"failed":4,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]}
SSSSSSSSS
Mar 25 18:39:08.313: INFO: Running AfterSuite actions on all nodes
Mar 25 18:39:08.313: INFO: Running AfterSuite actions on node 1
Mar 25 18:39:08.313: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/sig_storage/junit_01.xml
{"msg":"Test Suite completed","total":115,"completed":90,"skipped":5643,"failed":4,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]}

Summarizing 4 Failures:

[Fail] [sig-storage] CSI mock volume CSIStorageCapacity [It] CSIStorageCapacity used, have capacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1201

[Fail] [sig-storage] HostPath [It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742

[Fail] [sig-storage] CSI mock volume CSIStorageCapacity [It] CSIStorageCapacity used, insufficient capacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1201

[Fail] [sig-storage] CSI mock volume CSIStorageCapacity [It] CSIStorageCapacity used, no capacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1232

Ran 94 of 5737 Specs in 4757.581 seconds
FAIL! -- 90 Passed | 4 Failed | 0 Pending | 5643 Skipped
--- FAIL: TestE2E (4757.67s)
FAIL