Add CSI driver for gluster block #150
Conversation
Can one of the admins verify this patch?
add to whitelist
cmd/glusterblock/main.go
Outdated
| "fmt" | ||
| "os" | ||
|
|
||
| gfd "github.com/gluster/gluster-csi-driver/pkg/glusterblock" |
gfd (gluster file driver) to gbd (gluster block driver)
Can we rename glusterblock to glusterloopback?

Can we call it 'glustervirtualblock' (gvb) instead? I'm thinking of avoiding 'loop' in the code.

Done
cmd/glusterblock/main.go
Outdated
cmd := &cobra.Command{
	Use:   "glusterblock-csi-driver",
	Short: "GlusterFS Block CSI driver",
GlusterFS Block CSI driver or Gluster loopback CSI driver?

Rename glusterblock-csi-driver to glusterloopback-csi-driver?

Again, as said, let's avoid 'loop'; 'virtual' is fine.

Done
gfd "github.com/gluster/gluster-csi-driver/pkg/glusterfs"
"github.com/gluster/gluster-csi-driver/pkg/glusterfs/utils"
"github.com/gluster/gluster-csi-driver/pkg/glusterfs/config"
What's the intention behind renaming this folder?

There was common code between glusterfs and glusterblock; it was moved to "pkg/utils/common-utils.go" under the package name "utils". Hence config.go was moved into the respective config/config.go with package name "config".
@@ -0,0 +1,295 @@
# Copyright 2018 The Gluster CSI Authors.
This template needs to be updated to use the latest sidecar CSI images.

IMO, create the directory structure below:
example/kubernetes/glusterfs/ ---> for glusterfs
example/kubernetes/gluster-loopback/ ---> for loopback

Done
@@ -0,0 +1,87 @@
# Copyright 2018 The Gluster CSI Authors.
Can we use a single Dockerfile for both? I feel there is a lot of duplication here; can we make use of arguments during docker build?

Dependencies are different for each file, and it would get messy with if blocks. I feel keeping them separate is cleaner.
}

// NodeGetCapabilities returns the supported capabilities of the node server
func (ns *NodeServer) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
This needs to be updated; check #149.

Done
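For reference, a minimal sketch of what the updated handler could look like, using the CSI spec's Go bindings. It assumes the driver's existing NodeServer type; the exact capability list (and the vendored import path of the csi package) depends on what #149 settles on, so treat the STAGE_UNSTAGE_VOLUME entry as an assumption:

```go
import (
	"context"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// NodeGetCapabilities advertises the optional node RPCs this driver supports.
// Sketch only: the concrete capability set here is an assumption.
func (ns *NodeServer) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
	return &csi.NodeGetCapabilitiesResponse{
		Capabilities: []*csi.NodeServiceCapability{
			{
				Type: &csi.NodeServiceCapability_Rpc{
					Rpc: &csi.NodeServiceCapability_RPC{
						Type: csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME,
					},
				},
			},
		},
	}, nil
}
```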
@@ -0,0 +1,17 @@
package config
This can be moved to a common place to avoid code duplication.

The config of the glusterfs CSI and glusterblock CSI drivers is a little different, hence kept separate.
pkg/glusterfs/controllerserver.go
Outdated
// CreateSnapshot create snapshot of an existing PV
func (cs *ControllerServer) CreateSnapshot(ctx context.Context, req *csi.CreateSnapshotRequest) (*csi.CreateSnapshotResponse, error) {
func (cs *ControllerServer) CreateSnapshot(ctx context.Context, req *csi.CreateSnapshotRequest) (*csi.CreateSnapshotResponse, error) { //nolint: gocyclo
The //nolint: gocyclo directive needs to be removed.

Done
}

// MountVolume mounts the given source at the target
func MountVolume(srcPath string, targetPath string, fstype string, mo []string) error {
Can this be used for the glusterfs driver also?

It's not currently used there, but yes, it can be.
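For context, a minimal sketch of what such a shared mount helper could look like, shelling out to mount(8). This is illustrative only, not necessarily the PR's actual implementation:

```go
package utils

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// MountVolume mounts srcPath at targetPath with the given filesystem type
// and mount options by invoking mount(8). Sketch only; the real helper in
// the PR may differ.
func MountVolume(srcPath string, targetPath string, fstype string, mo []string) error {
	// Ensure the mount point exists before mounting.
	if err := os.MkdirAll(targetPath, 0750); err != nil {
		return fmt.Errorf("failed to create mount point %s: %v", targetPath, err)
	}
	args := []string{"-t", fstype}
	if len(mo) > 0 {
		args = append(args, "-o", strings.Join(mo, ","))
	}
	args = append(args, srcPath, targetPath)
	if out, err := exec.Command("mount", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("mount of %s at %s failed: %v, output: %s", srcPath, targetPath, err, out)
	}
	return nil
}
```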
pkg/utils/common-utils.go
Outdated
}

// Config struct fills the parameters of request or user input
type Config struct {
I think this is not required (avoid code duplication).

Done
Does this supersede #105?
Yes, changed due to gluster/glusterd2#1439
Force-pushed 65a1cbc to 60beba9
securityContext:
  privileged: true
  capabilities:
    add: ["CAP_MKNOD"]
Why does the provisioner need these capabilities? (AFAIK it only creates and deletes volumes.)

Good catch. Done
}

// ParseCreateVolRequest parse incoming volume create request and fill ProvisionerConfig.
func (cs *ControllerServer) ParseCreateVolRequest(req *csi.CreateVolumeRequest) (*ProvisionerConfig, error) {
Can we remove this function, as it's not doing anything useful?

And remove this one also.

Done
// CreateVolume creates and starts the volume
func (cs *ControllerServer) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
	glog.V(2).Infof("request received %+v", req)
You need to remove logging of secrets from the request; please check #148.

Done
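A minimal sketch of one way to do that: log only the non-sensitive fields instead of dumping the whole request struct. The getters shown exist on the generated CSI request types (the secrets accessor name varies with the vendored CSI spec version), and glog is assumed to be imported as in the surrounding file; the choice of fields to log is illustrative:

```go
// Log only the non-sensitive parts of the request; never log the secrets map.
glog.V(2).Infof("CreateVolume request: name=%q, capacityRange=%+v, parameters=%+v",
	req.GetName(), req.GetCapacityRange(), req.GetParameters())
```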
volumeReq := bapi.BlockVolumeCreateRequest{
	BlockVolumeInfo: &blockVolInfo,
}
I think one more validation is missing: you need to reject volume creation from a snapshot here.

Done
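A sketch of such a guard, placed near the top of CreateVolume. It assumes the grpc status and codes packages are already imported, as is typical in CSI drivers:

```go
// Reject requests that ask for a volume pre-populated from a snapshot or
// another volume, since the block driver does not support that yet.
if req.GetVolumeContentSource() != nil {
	return nil, status.Error(codes.InvalidArgument,
		"block volume creation from a snapshot is not supported")
}
```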
pkg/gluster-virtblock/driver.go
Outdated
const (
	glusterBlockCSIDriverName    = "org.gluster.glustervirtblock"
	glusterBlockCSIDriverVersion = "0.0.9"
Need to update the driver version to 1.0.0.

Done
@JohnStrunk @humblec PTAL
/profile.cov >profile.cov
fi

DRIVER="glustervirtblock-csi-driver"
The driver name looks a bit confusing to me. If we mean this is a virtual block driver, we could work on better names: 'virt block' vs. 'virtual block', I prefer the latter.

As discussed in the gcs channel, keeping it as glustervirtblock.
)

func init() {
	// #nosec
What is #nosec?

It's an annotation that tells the gosec security linter to skip the warning on that line, as the error check is deliberately not done for now.
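To be precise, #nosec belongs to gosec (the Go security checker), not golint. A small illustrative example of the pattern; the flag.Set call here is hypothetical and not necessarily what this init() does:

```go
package main

import "flag"

func init() {
	// gosec's G104 rule flags unhandled errors. The trailing annotation
	// tells the linter to skip this call, where the error is knowingly
	// ignored.
	flag.Set("logtostderr", "true") // #nosec
}
```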
cmd/gluster-virtblock/main.go
Outdated
cmd := &cobra.Command{
	Use:   "gluster-virtual-block-csi-driver",
	Short: "GlusterFS Virtual Block CSI driver",
There is no point in keeping the GlusterFS string here; Gluster fits better.

Done
// ProvisionerConfig is the combined configuration of gluster cli vol create request and CSI driver specific input
type ProvisionerConfig struct {
	gdVolReq *api.VolCreateReq
	//csiConf *CsiDrvParam
It may be more apt to have a block volume request param.

Didn't get it, could you elaborate?

Please remove this structure; it's not needed at all.

Done
@kotreshhr @poornimag do we have any design doc we can point to for the block driver? If yes, please put a pointer here in this PR for reference.
| "os" | ||
|
|
||
| gvb "github.com/gluster/gluster-csi-driver/pkg/gluster-virtblock" | ||
| "github.com/gluster/gluster-csi-driver/pkg/gluster-virtblock/config" |
I feel either virtualblock or vb suits better than virtblock.

Also, considering that the glusterfs and gluster vb drivers use the same config params of a GD2 cluster, the config could be an outside library for both to consume once it's imported.

I will do this as part of a code-refactor patch.
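A hedged sketch of what such an extracted config package might look like; every field name here is an assumption based on the parameters visible in this PR (CSI endpoint, node ID, GD2 REST URL and credentials):

```go
// Package config would hold the GD2 connection settings shared by the
// glusterfs and gluster-virtblock CSI drivers. Sketch only, not this PR's code.
package config

// Config captures the common driver/GD2 parameters (all field names assumed).
type Config struct {
	Endpoint   string // CSI endpoint, e.g. unix://plugin/csi.sock
	NodeID     string // node name as registered with kubelet
	RestURL    string // GD2 REST endpoint, e.g. http://glusterd2-client.gcs:24007
	RestUser   string // GD2 REST auth user, if auth is enabled
	RestSecret string // GD2 REST auth secret, if auth is enabled
}
```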
- name: CSI_ENDPOINT
  value: unix://plugin/csi.sock
- name: REST_URL
  value: http://glusterd2-client.gcs:24007
What's this URL all about? Is it a constant value for any GD2 cluster?

It is just like the URL in the glusterfs-csi deployment file [1]. If it's deployed manually, this has to be changed, which will be mentioned in the README. If the deployment is part of gcs, gcs will take care of updating this.
[1]
value: http://192.168.121.182:24007
if !found {
	source := gs + ":" + volume
	hostPath = path.Join(ns.Config.MntPathPrefix, volume)
	err = utils.MountVolume(source, hostPath, "glusterfs", nil)
What are the permissions of this volume? How do we make sure the ACL is properly configured between different app pod mounts?

Right now it's mounted with default options, and with that the host volume mount is accessible to different pods, as per testing.
if err != nil {
	return nil, err
}
ns.Config.Mounts[volume] = hostPath
How do we ensure reference counting? Also, how do we make sure no (stale) supervol mount exists when the reference count of block volumes on it is "0"?

The logic for creating multiple block hosting volumes is in GD2: if a block can't be created on the first one, a second one is created. The volume context returns the correct block hosting volume, and that is what is used. On the actual deletion of the block volume context, @poornimag or @oshankkumar would be better able to answer.

What humble meant was: if you delete all block volumes, why would you want to keep the mount of the BHV on the node plugin?
@humblec if you have multiple BHVs (maybe we will end up with stale BHV mounts), that can be fixed later.

Yeah, it's not ref-counted yet; there will be stale host volume mounts if all the blocks are deleted. It can be sent as an incremental patch.
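For that incremental patch, a minimal sketch of what per-BHV reference counting on the node plugin could look like; the package, type, and method names here are all hypothetical:

```go
package glustervirtblock

import (
	"sync"
	"syscall"
)

// mountTracker reference-counts block-hosting-volume (BHV) mounts so the
// supervol mount can be dropped once its last block volume is unpublished.
// Hypothetical sketch, not part of this PR.
type mountTracker struct {
	mu   sync.Mutex
	refs map[string]int // BHV name -> block volumes currently using it
}

// acquire records one more block volume on the BHV (e.g. in NodePublishVolume).
func (t *mountTracker) acquire(bhv string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.refs[bhv]++
}

// release drops one reference and unmounts the BHV mount at hostPath when no
// references remain (e.g. in NodeUnpublishVolume), avoiding stale mounts.
func (t *mountTracker) release(bhv, hostPath string) error {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.refs[bhv]--
	if t.refs[bhv] > 0 {
		return nil
	}
	delete(t.refs, bhv)
	return syscall.Unmount(hostPath, 0)
}
```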
@kotreshhr @poornimag we also need a README for sure :)
Force-pushed cbfdedc to 7179dc4
| "pkg/util/feature" | ||
| "pkg/util/feature", | ||
| ] | ||
| pruneopts = "NUT" |
@kotreshhr which dep version are you using?
[root@kotresh gluster-csi-driver $]dep version
dep:
version : v0.5.0
build date : 2018-07-26
git hash : 224a564
go version : go1.10.3
go compiler : gc
platform : linux/amd64
features : ImportDuringSolve=false
README.md
Outdated
### 1. RWO Volume Claim (Gluster Virtual Block CSI driver)

[root@localhost]# kubectl create -f csi-deployment.yaml
Please mention which directory csi-deployment.yaml is present in.

Done
README.md
Outdated
metadata:
  name: glustervirtblock-csi
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
Do we need to make this storage class the default one?

I think we can't have two default storage classes in the same cluster; please verify this.

Done
README.md
Outdated
local-storage kubernetes.io/no-provisioner 29h

Unset glusterfs-csi as the default storage class if it's currently the default
Ah, this is not required; please make the gluster-block storage class non-default.

Done
name: glusterblock-csi-pv
annotations:
  storageClassName: glustervirtblock-csi
spec:
Instead of storageClassName in an annotation, use spec.storageClassName; check https://github.com/gluster/gluster-csi-driver/pull/154/files#diff-04c6e90faac2675aa89e2176d2eec7d8R119

Done
requests:
  storage: 100Mi

[root@localhost]# kubectl create -f pvc.yaml
The cat and create commands should be in separate blocks.

Done
README.md
Outdated
spec:
  containers:
  - name: smallfile
    image: quay.io/ekuric/smallfile:latest
What does this container image do? Can't we use the existing redis image?

Done
README.md
Outdated
persistentVolumeClaim:
  claimName: glusterblock-csi-pv

[root@localhost cluster]#kubectl create -f app.yaml
Please keep the [root@localhost] prompt consistent across the README.

Done
README.md
Outdated
NAME        READY   STATUS    RESTARTS   AGE
gluster-0   1/1     Running   0          38s

[root@localhost]# kubectl describe pods/gluster-0
Instead of describe, just use kubectl get po; if the pod is in the Running state, it's fine.

Done
Force-pushed 7179dc4 to 3712dda
This patch intends to add the gluster-virtblock CSI driver, which exports a block device provided by Gluster. The current implementation expects a mount point as a parameter, where the block devices will be created. Co-Authored-by: Poornima G <pgurusid@redhat.com> Co-Authored-by: Kotresh HR <khiremat@redhat.com> Signed-off-by: Kotresh HR <khiremat@redhat.com>
Force-pushed 3712dda to 0e1f281
amarts left a comment:
LGTM. I personally think we should go ahead and get this moving, and make further improvements as follow-up PRs.
Madhu-1 left a comment:
LGTM; need to do code cleanup once this PR is merged.
JohnStrunk left a comment:
I'm OK with this. There will need to be a follow-up to clean up the build process.
This patch intends to add the glusterblock CSI driver, which exports a loopback device formatted as xfs. The current implementation expects a mount point as a parameter, where the block devices will be created.
The original patch was taken from [1]
[1] master...Madhu-1:lo-block
Co-Authored-by: Poornima G pgurusid@redhat.com
Co-Authored-by: Kotresh HR khiremat@redhat.com
Signed-off-by: Poornima G pgurusid@redhat.com