Add CSI driver for gluster block (loopback device) #105

poornimag wants to merge 1 commit into gluster:master from
Conversation
Can one of the admins verify this patch?

add to whitelist

@poornimag CI is failing
# Install dependencies
RUN yum -y install centos-release-gluster && \
    yum -y install glusterfs-fuse kmod-xfs xfsdump xfsprogs dmapi -y && \
I do not think there is a kmod-xfs or dmapi package in CentOS.
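A possible corrected install line, dropping the two packages that do not exist in the CentOS repositories and the duplicated `-y` flag (the remaining package set is an assumption; `xfsprogs` supplies the XFS userspace tools such as mkfs.xfs):

```dockerfile
# Install dependencies; kmod-xfs and dmapi are dropped since they are
# not available in CentOS repos, and xfsprogs provides the XFS tooling
RUN yum -y install centos-release-gluster && \
    yum -y install glusterfs-fuse xfsdump xfsprogs && \
    yum clean all
```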
ARG builddate="(unknown)"

LABEL build-date="${builddate}"
LABEL io.k8s.description="FUSE-based CSI driver for Gluster file access"
"for Gluster file acces"? Both here and a few lines below.
@@ -0,0 +1,321 @@
package glusterblock
missing copyright notice.
Is there a design or README for this gluster-block feature?
    mountPath: /var/lib/csi/sockets/pluginproxy/

- name: glusterblock
  image: docker.io/poornimag/glusterblock-csi-driver
image name needs to be updated to use docker.io/gluster

this is applicable in all places
args:
  - "--nodeid=$(NODE_ID)"
  - "--endpoint=$(CSI_ENDPOINT)"
  - "--resturl=$(REST_URL)"
as of now, we are not communicating with gd2. I think we can remove this argument

this is applicable in all places
      fieldPath: spec.nodeName
- name: CSI_ENDPOINT
  value: unix://plugin/csi.sock
- name: REST_URL
this is applicable in all places
	return nil, status.Error(codes.AlreadyExists, "block volume already exists")
}
fileName := hostPath + volumeName
_, err = os.Create(fileName)
before creating the new files, I think we need to check free space on the BHV.
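A minimal sketch of such a check, assuming the driver can call statfs(2) on the block hosting path (`freeBytes` is a hypothetical helper, not part of the patch):

```go
package main

import (
	"fmt"
	"syscall"
)

// freeBytes reports the free space available to unprivileged callers
// at path, using statfs(2). CreateVolume could compare this against
// the requested capacity before creating the backing file.
func freeBytes(path string) (uint64, error) {
	var st syscall.Statfs_t
	if err := syscall.Statfs(path, &st); err != nil {
		return 0, err
	}
	return st.Bavail * uint64(st.Bsize), nil
}

func main() {
	free, err := freeBytes("/")
	if err != nil {
		fmt.Println("statfs failed:", err)
		return
	}
	fmt.Println(free > 0)
}
```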
if _, err = os.Stat(hostPath); os.IsNotExist(err) {
	glog.Errorf("failed to create block volume as the block hosting path doesn't exist: %v", err)
	return nil, err
the error should be like
return nil, status.Errorf(codes.InvalidArgument, "%v", err)

this is applicable in all places, we need to return applicable status codes while returning errors
}
if _, err = os.Stat(hostPath + volumeName); !os.IsNotExist(err) {
	glog.Errorf("block volume %s already exists", volumeName)
	return nil, status.Error(codes.AlreadyExists, "block volume already exists")
if block volume already exists, we need to send back the success response
if err == nil {
	deviceName := strings.Split(string(out), " \n")
	device = deviceName[0]
}
what happens when we get some error here?
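A sketch of propagating the losetup failure instead of continuing with an empty device name; `attachLoopDevice` is a hypothetical helper, and `losetup --find --show` prints the allocated loop device on success:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// attachLoopDevice attaches fileName to a free loop device via losetup
// and returns the device path. Unlike the original snippet, it returns
// the error to the caller rather than silently leaving device empty.
func attachLoopDevice(fileName string) (string, error) {
	out, err := exec.Command("losetup", "--find", "--show", fileName).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("losetup failed for %s: %v (%s)", fileName, err, out)
	}
	device := strings.TrimSpace(string(out))
	if device == "" {
		return "", fmt.Errorf("losetup returned no device for %s", fileName)
	}
	return device, nil
}

func main() {
	if _, err := attachLoopDevice("/no/such/file"); err != nil {
		fmt.Println("error propagated")
	}
}
```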
hostPath := cs.Config.BlockHostPath
fileName := hostPath + volumeID
err := os.Remove(fileName)
if err != nil {
if the fileName is not found, we need to return a success response
@poornimag any update on this PR?

@poornimag Please have an issue filed and capture the same in the commit message. We'd like to get this targeted for the GCS/0.5 release.
aravindavk left a comment:

I see BlockHostPath is accepted as a config parameter. Who will create the block hosting volume?
@aravindavk The earlier assumption was that the operator would create the BHV and pass it as an argument during CSI driver initialization. The design has since changed so that the CSI driver will take care of the BHV. @poornimag will be updating the PR with a design doc.
46529dd to ea68c8f (force-pushed)
I don't see any progress on this PR yet. Please note we're targeting this PR for GCS/0.5. I'd also like to see an overall end-to-end consumption workflow based on the GCS stack for the loopback-based RWO story somewhere, most probably in the GCS wiki.
ea68c8f to 030e9cd (force-pushed)
This patch intends to add a glusterblock CSI driver which exports a loopback device formatted as xfs. The current implementation expects a mount point as a parameter, where the block devices will be created. The original patch was taken from [1].

[1] gluster/gluster-csi-driver@master...Madhu-1:lo-block

Co-Authored-by: Poornima G <pgurusid@redhat.com>
Signed-off-by: Poornima G <pgurusid@redhat.com>
030e9cd to 8010ccf (force-pushed)
@poornimag is there any benchmark available for PV create time for this model? It would be great to see some graphs on PV create rate and delete rate, maybe up to 50 or 100 PVs. If yes, can you share it here? That would help people understand the benefit of this approach.
Did some testing with this patch on the scalability of block devices backed by a loopback file hosted on a Gluster volume. Here is the link to the setup steps and the scale-testing results. This patch shall be re-written to use the latest gd2 capabilities; will add a blog and more details once we get the newer version.
3360 PVs in ~45 minutes, with a constant rate... very nice. 🎉
This is being implemented via #150. |
This patch intends to add glusterblock csi driver which
exports a loopback device formatted as xfs. The current
implementation expects a mount point as a parameter, where
the block devices will be created.
The original patch was taken from [1]
[1] master...Madhu-1:lo-block
Co-Authored-by: Poornima G pgurusid@redhat.com
Signed-off-by: Poornima G pgurusid@redhat.com
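The provisioning flow the commit message describes can be sketched in shell; the paths are illustrative, and the attach/format steps, left commented out here, need root:

```shell
# Create a sparse backing file on the block hosting volume's mount point
bhv=$(mktemp -d)                 # stand-in for the real BHV mount point
truncate -s 1G "$bhv/pv-demo"
stat -c %s "$bhv/pv-demo"

# The driver would then attach the file to a loop device and format it:
# device=$(losetup --find --show "$bhv/pv-demo")
# mkfs.xfs "$device"
```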