
Add CSI driver for gluster block (loopback device) #105

Closed
poornimag wants to merge 1 commit into gluster:master from poornimag:gluster-block-loopback

Conversation

@poornimag

This patch intends to add a glusterblock CSI driver which
exports a loopback device formatted as xfs. The current
implementation expects a mount point as a parameter, where
the block devices will be created.

The original patch was taken from [1]

[1] master...Madhu-1:lo-block

Co-Authored-by: Poornima G pgurusid@redhat.com
Signed-off-by: Poornima G pgurusid@redhat.com
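
For context, a minimal sketch of the create path this description refers to, assuming a losetup/mkfs.xfs sequence; the function name, size handling, and commands are illustrative, not the exact code in this PR:

package glusterblock

import (
    "fmt"
    "os"
    "os/exec"
    "strings"
)

// createLoopbackBlockVolume allocates a sparse backing file under the
// block-hosting mount point, attaches it to a free loop device, and
// formats that device as xfs. It returns the loop device path.
func createLoopbackBlockVolume(hostPath, volumeName string, sizeBytes int64) (string, error) {
    fileName := hostPath + volumeName

    // Create the backing file and extend it to the requested size
    // without writing data (sparse allocation).
    f, err := os.Create(fileName)
    if err != nil {
        return "", err
    }
    if err := f.Truncate(sizeBytes); err != nil {
        f.Close()
        return "", err
    }
    f.Close()

    // Attach the file to the first free loop device; --show prints
    // the device name that was picked (e.g. /dev/loop0).
    out, err := exec.Command("losetup", "--find", "--show", fileName).CombinedOutput()
    if err != nil {
        return "", fmt.Errorf("losetup failed: %v: %s", err, out)
    }
    device := strings.TrimSpace(string(out))

    // Format the loop device as xfs so the node plugin can mount it.
    if out, err := exec.Command("mkfs.xfs", device).CombinedOutput(); err != nil {
        return "", fmt.Errorf("mkfs.xfs failed on %s: %v: %s", device, err, out)
    }
    return device, nil
}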


@ghost ghost assigned poornimag Nov 25, 2018
@ghost ghost added the in progress label Nov 25, 2018
@centos-ci
Collaborator

Can one of the admins verify this patch?

@Madhu-1
Member

Madhu-1 commented Nov 26, 2018

add to whitelist

@Madhu-1
Member

Madhu-1 commented Nov 26, 2018

@poornimag CI is failing

Comment thread pkg/glusterblock/Dockerfile Outdated

# Install dependencies
RUN yum -y install centos-release-gluster && \
yum -y install glusterfs-fuse kmod-xfs xfsdump xfsprogs dmapi -y && \
Contributor

I do not think there is a kmod-xfs or dmapi package in CentOS.
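
A possible replacement, assuming xfsdump and xfsprogs provide all the tooling the driver needs:

# Install dependencies (kmod-xfs and dmapi are not available in CentOS)
RUN yum -y install centos-release-gluster && \
    yum -y install glusterfs-fuse xfsdump xfsprogs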

Comment thread pkg/glusterblock/Dockerfile Outdated
ARG builddate="(unknown)"

LABEL build-date="${builddate}"
LABEL io.k8s.description="FUSE-based CSI driver for Gluster file access"
Contributor

"for Gluster file acces"? Both here and a few lines below.

@@ -0,0 +1,321 @@
package glusterblock
Contributor

missing copyright notice.

@nixpanic
Contributor

Is there a design or README for this gluster-block feature?

mountPath: /var/lib/csi/sockets/pluginproxy/

- name: glusterblock
image: docker.io/poornimag/glusterblock-csi-driver
Member

image name needs to be updated to use docker.io/gluster
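
Something like the following, assuming the image keeps the same name under the gluster org:

- name: glusterblock
  image: docker.io/gluster/glusterblock-csi-driver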

Member

this is applicable in all places

args:
- "--nodeid=$(NODE_ID)"
- "--endpoint=$(CSI_ENDPOINT)"
- "--resturl=$(REST_URL)"
Member

as of now, we are not communicating with gd2. I think we can remove this argument

Member

this is applicable in all places

fieldPath: spec.nodeName
- name: CSI_ENDPOINT
value: unix://plugin/csi.sock
- name: REST_URL
Member

this is also not required.

Member

this is applicable in all places

return nil, status.Error(codes.AlreadyExists, "block volume already exists")
}
fileName := hostPath + volumeName
_, err = os.Create(fileName)
Member

before creating the new files, I think we need to check the free space on the BHV.
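
A sketch of such a check with golang.org/x/sys/unix; the helper name and the idea of comparing against the requested size are assumptions:

import "golang.org/x/sys/unix"

// hasFreeSpace reports whether the block-hosting volume mounted at
// hostPath has at least sizeBytes available for a new backing file.
func hasFreeSpace(hostPath string, sizeBytes int64) (bool, error) {
    var stat unix.Statfs_t
    if err := unix.Statfs(hostPath, &stat); err != nil {
        return false, err
    }
    // Bavail counts blocks available to unprivileged users.
    free := int64(stat.Bavail) * stat.Bsize
    return free >= sizeBytes, nil
}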


if _, err = os.Stat(hostPath); os.IsNotExist(err) {
glog.Errorf("failed to create block volume as the block hosting path doesn't exist: %v", err)
return nil, err
Member

the error should be something like
return nil, status.Error(codes.InvalidArgument, err.Error())

Member

this is applicable in all places; we need to return appropriate status codes when returning errors
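
For example, a sketch of how the stat failure above could be mapped (messages and code choices are suggestions, not the PR's code):

if _, err := os.Stat(hostPath); err != nil {
    if os.IsNotExist(err) {
        // Caller passed a hosting path that does not exist.
        return nil, status.Errorf(codes.InvalidArgument, "block hosting path %s does not exist", hostPath)
    }
    // Anything else is an unexpected server-side failure.
    return nil, status.Errorf(codes.Internal, "failed to stat %s: %v", hostPath, err)
}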

Comment thread pkg/glusterblock/controllerserver.go Outdated
}
if _, err = os.Stat(hostPath + volumeName); !os.IsNotExist(err) {
glog.Errorf("block volume %s already exixts")
return nil, status.Error(codes.AlreadyExists, "block volume already exists")
Member

if block volume already exists, we need to send back the success response
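
i.e. something along these lines, assuming the CSI v1.0 Go bindings (github.com/container-storage-interface/spec/lib/go/csi) and a volSizeBytes computed earlier in CreateVolume:

if _, err := os.Stat(hostPath + volumeName); err == nil {
    // CreateVolume must be idempotent: if the backing file already
    // exists for this name, return the existing volume as success
    // instead of codes.AlreadyExists.
    return &csi.CreateVolumeResponse{
        Volume: &csi.Volume{
            VolumeId:      volumeName,
            CapacityBytes: volSizeBytes,
        },
    }, nil
}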

Comment thread pkg/glusterblock/controllerserver.go Outdated
return nil, err
}
if _, err = os.Stat(hostPath + volumeName); !os.IsNotExist(err) {
glog.Errorf("block volume %s already exixts")
Member

typo exixts to exists

Comment thread pkg/glusterblock/controllerserver.go Outdated
if err == nil {
deviceName := strings.Split(string(out), " \n")
device = deviceName[0]
}
Member

what happens when we get some error here?
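
On error, device stays empty and the later steps would operate on an empty string. Failing the request seems safer; a sketch, assuming losetup -j is how the device is being resolved:

out, err := exec.Command("losetup", "-j", fileName).CombinedOutput()
if err != nil {
    // Do not fall through with an empty device name.
    return nil, status.Errorf(codes.Internal, "failed to find loop device for %s: %v", fileName, err)
}
// Output looks like "/dev/loop0: []: (/path/to/file)".
device := strings.Split(strings.TrimSpace(string(out)), ":")[0]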

hostPath := cs.Config.BlockHostPath
fileName := hostPath + volumeID
err := os.Remove(fileName)
if err != nil {
Member

if fileName is not found we need to return a success response
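
i.e. treat a missing backing file as already deleted, for example:

err := os.Remove(fileName)
if err != nil && !os.IsNotExist(err) {
    return nil, status.Errorf(codes.Internal, "failed to remove %s: %v", fileName, err)
}
// A missing file means the volume is already gone; DeleteVolume is
// idempotent, so report success.
return &csi.DeleteVolumeResponse{}, nil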

@Madhu-1
Member

Madhu-1 commented Dec 13, 2018

@poornimag any update on this PR?

@atinmu

atinmu commented Dec 13, 2018

@poornimag Please file an issue and reference it in the commit message. We'd like to get this targeted for the GCS/0.5 release.

@Madhu-1 @JohnStrunk @humblec ^^

Member

@aravindavk aravindavk left a comment

I see the BlockHost path is accepted as a config parameter. Who will create the block hosting volume?

@Madhu-1
Member

Madhu-1 commented Dec 13, 2018

@aravindavk the earlier assumption was that the operator would create the BHV and pass it as an argument during CSI driver initialization.

the design has changed so that the CSI driver will take care of the BHV. @poornimag will be updating the PR with a design doc.

@poornimag poornimag force-pushed the gluster-block-loopback branch from 46529dd to ea68c8f Compare December 15, 2018 06:42
@atinmu

atinmu commented Dec 18, 2018

I don't see any progress on this PR yet. Please note we're targeting this PR for GCS/0.5. I'd also like to see an overall end-to-end consumption workflow, based on the GCS stack, for the loopback-based RWO story documented somewhere, most probably in the GCS wiki.

@poornimag poornimag force-pushed the gluster-block-loopback branch from ea68c8f to 030e9cd Compare December 21, 2018 13:06
This patch intends to add a glusterblock CSI driver which
exports a loopback device formatted as xfs. The current
implementation expects a mount point as a parameter, where
the block devices will be created.

The original patch was taken from [1]

[1] gluster/gluster-csi-driver@master...Madhu-1:lo-block

Co-Authored-by: Poornima G <pgurusid@redhat.com>
Signed-off-by: Poornima G <pgurusid@redhat.com>
@poornimag poornimag force-pushed the gluster-block-loopback branch from 030e9cd to 8010ccf Compare January 10, 2019 11:05
@amarts
Member

amarts commented Jan 10, 2019

@poornimag is there any benchmark available for PV create time for this model? It would be great to see some graphs on PV create rate and delete rate, maybe up to 50 or 100 PVs. If yes, can you share it here? That would help people understand the benefit of this approach.

@poornimag
Author


Did some testing with this patch on the scalability of block devices backed by a loopback file hosted on a Gluster volume. Here is the link to the setup steps and the scale-testing results.

This patch shall be rewritten to use the latest gd2 capabilities; will add a blog post and more details once we get the newer version.

@JohnStrunk
Member

3360 PVs in ~45 minutes, with a constant rate... very nice. 🎉

@JohnStrunk
Member

This is being implemented via #150.
Closing.

@JohnStrunk JohnStrunk closed this Jan 16, 2019
@ghost ghost removed the in progress label Jan 16, 2019
