
Conversation

@kotreshhr
Contributor

This patch intends to add a glusterblock CSI driver which
exports a loopback device formatted as xfs. The current
implementation expects a mount point as a parameter, under
which the block devices will be created.

The original patch was taken from [1]

[1] master...Madhu-1:lo-block

Co-Authored-by: Poornima G <pgurusid@redhat.com>
Co-Authored-by: Kotresh HR <khiremat@redhat.com>
Signed-off-by: Poornima G <pgurusid@redhat.com>


@centos-ci
Collaborator

Can one of the admins verify this patch?

@Madhu-1
Member

Madhu-1 commented Jan 16, 2019

add to whitelist

"fmt"
"os"

gfd "github.com/gluster/gluster-csi-driver/pkg/glusterblock"
Member

gfd (gluster file driver) to gbd (gluster block driver)?

Can we rename glusterblock to glusterloopback?

Member

Can we call it 'glustervirtualblock' (gvb) instead? I am thinking of avoiding 'loop' in the code.

Contributor Author

Done


cmd := &cobra.Command{
Use: "glusterblock-csi-driver",
Short: "GlusterFS Block CSI driver",
Member

GlusterFS Block CSI driver or Gluster loopback CSI driver?

Member

glusterblock-csi-driver to glusterloopback-csi-driver?

Member

Again, as said, let's avoid 'loop'; calling it 'virtual' is fine.

Contributor Author

Done


gfd "github.com/gluster/gluster-csi-driver/pkg/glusterfs"
"github.com/gluster/gluster-csi-driver/pkg/glusterfs/utils"
"github.com/gluster/gluster-csi-driver/pkg/glusterfs/config"
Member

What's the intention behind renaming this folder?

Contributor Author

There was common code between glusterfs and glusterblock, which was moved under "pkg/utils/common-utils.go" with package name "utils". Hence config.go was moved into the respective config/config.go with package name "config".

@@ -0,0 +1,295 @@
# Copyright 2018 The Gluster CSI Authors.
Member

this template needs to be updated to use the latest sidecar CSI images

Member

IMO create the below directory structure:

example/kubernetes/glusterfs/ ---> for glusterfs
example/kubernetes/gluster-loopback/ ---> for loopback

Contributor Author

Done

@@ -0,0 +1,87 @@
# Copyright 2018 The Gluster CSI Authors.
Member

Can we use a single Dockerfile for both? I see a lot of code duplication here; can we make use of arguments during docker build?

Contributor Author

Dependencies are different for each file. It will be messy with if blocks. I feel having them separate is cleaner.

}

// NodeGetCapabilities returns the supported capabilities of the node server
func (ns *NodeServer) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
Member

This needs to be updated; check #149.

Contributor Author

Done
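For reference, a hedged sketch of the updated RPC, assuming #149 advertises STAGE_UNSTAGE_VOLUME (the actual capability set in that PR may differ; imports as in the surrounding file):

```
// Sketch only, not the exact code from #149.
func (ns *NodeServer) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
	return &csi.NodeGetCapabilitiesResponse{
		Capabilities: []*csi.NodeServiceCapability{
			{
				Type: &csi.NodeServiceCapability_Rpc{
					Rpc: &csi.NodeServiceCapability_RPC{
						// Advertise staging support; adjust to match the
						// capabilities the driver actually implements.
						Type: csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME,
					},
				},
			},
		},
	}, nil
}
```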

@@ -0,0 +1,17 @@
package config
Member

This can be moved to a common place to avoid code duplication.

Contributor Author

The config of the glusterfs CSI and glusterblock CSI drivers is a little different, hence kept separate.


// CreateSnapshot create snapshot of an existing PV
func (cs *ControllerServer) CreateSnapshot(ctx context.Context, req *csi.CreateSnapshotRequest) (*csi.CreateSnapshotResponse, error) {
func (cs *ControllerServer) CreateSnapshot(ctx context.Context, req *csi.CreateSnapshotRequest) (*csi.CreateSnapshotResponse, error) { //nolint: gocyclo
Member

//nolint: gocyclo needs to be removed

Contributor Author

Done

}

// MountVolume mounts the given source at the target
func MountVolume(srcPath string, targetPath string, fstype string, mo []string) error {
Member

Can this be used for the glusterfs driver also?

Contributor Author

Currently not being used, but yes, it can be.
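Since this is utility code, here is a minimal sketch of how such a helper can work (an illustration shelling out to mount(8), not necessarily the implementation in this PR):

```
package utils

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// MountVolume mounts srcPath at targetPath with the given filesystem type
// and mount options by invoking mount(8).
func MountVolume(srcPath, targetPath, fstype string, mo []string) error {
	// Ensure the target directory exists before mounting.
	if err := os.MkdirAll(targetPath, 0755); err != nil {
		return fmt.Errorf("failed to create target path %s: %v", targetPath, err)
	}
	args := []string{"-t", fstype}
	if len(mo) > 0 {
		args = append(args, "-o", strings.Join(mo, ","))
	}
	args = append(args, srcPath, targetPath)
	if out, err := exec.Command("mount", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("mount of %s at %s failed: %v (output: %s)", srcPath, targetPath, err, out)
	}
	return nil
}
```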

}

// Config struct fills the parameters of request or user input
type Config struct {
Member

This is not required, I think (to avoid code duplication).

Contributor Author

Done

@JohnStrunk
Member

Does this supersede #105?

@amarts
Member

amarts commented Jan 16, 2019

Does this supersede #105?

Yes, changed due to gluster/glusterd2#1439

securityContext:
privileged: true
capabilities:
add: ["CAP_MKNOD"]
Member

Why does the provisioner need these capabilities? (AFAIK it only does create and delete volume.)

Contributor Author

Good catch. Done

}

// ParseCreateVolRequest parse incoming volume create request and fill ProvisionerConfig.
func (cs *ControllerServer) ParseCreateVolRequest(req *csi.CreateVolumeRequest) (*ProvisionerConfig, error) {
Member

Can we remove this function, as it doesn't implement any functionality?

Member

and remove this one also

Contributor Author

Done


// CreateVolume creates and starts the volume
func (cs *ControllerServer) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
glog.V(2).Infof("request received %+v", req)
Member

You need to remove logging of secrets from the request; please check #148.

Contributor Author

Done
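A hedged sketch of the kind of fix #148 made for the glusterfs driver, assuming the CSI 1.x Secrets field: log selected non-sensitive fields instead of dumping the whole request.

```
// Sketch: log chosen non-sensitive fields rather than "%+v" on the whole
// request, which would also print req.Secrets. The protobuf getters are
// nil-safe, so GetCapacityRange() needs no explicit nil check.
glog.V(2).Infof("received CreateVolume request for volume %s (capacity %d bytes)",
	req.GetName(), req.GetCapacityRange().GetRequiredBytes())
```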

volumeReq := bapi.BlockVolumeCreateRequest{
BlockVolumeInfo: &blockVolInfo,
}

Member

I think one more validation is missing: you need to reject volume creation from a snapshot here.

Contributor Author

Done
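A minimal sketch of such a validation (hypothetical helper name; the status codes follow the usual gRPC/CSI convention):

```
import (
	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// validateNoSnapshotSource is a hypothetical helper that rejects
// CreateVolume requests asking for a volume cloned from a snapshot,
// which the virtual block driver does not support.
func validateNoSnapshotSource(req *csi.CreateVolumeRequest) error {
	if req.GetVolumeContentSource() != nil {
		return status.Error(codes.InvalidArgument, "volume creation from a snapshot source is not supported")
	}
	return nil
}
```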


const (
glusterBlockCSIDriverName = "org.gluster.glustervirtblock"
glusterBlockCSIDriverVersion = "0.0.9"
Member

need to update driver version to 1.0.0

Contributor Author

Done
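So the constant block presumably becomes:

```
const (
	glusterBlockCSIDriverName    = "org.gluster.glustervirtblock"
	glusterBlockCSIDriverVersion = "1.0.0"
)
```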

@Madhu-1
Member

Madhu-1 commented Jan 21, 2019

@JohnStrunk @humblec PTAL

/profile.cov >profile.cov
fi

DRIVER="glustervirtblock-csi-driver"
Contributor

@humblec humblec Jan 21, 2019

The driver name looks a bit confusing to me; if we mean this is a virtual block driver, we could work on better names. Virt block vs. virtual block: I prefer the latter.

Contributor Author

As discussed in the gcs channel, keeping it as glustervirtblock.

)

func init() {
// #nosec
Contributor

What is #nosec?

Contributor Author

It's an annotation to skip the gosec lint warning, as the error check is not done for now.
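For context, a small hypothetical illustration of the annotation in use (#nosec is gosec's suppression comment; this is not the exact code from the PR):

```
package main

import "flag"

func init() {
	// #nosec -- the error returned by flag.Set is deliberately ignored for
	// now; the annotation tells gosec to skip the unhandled-error finding.
	flag.Set("logtostderr", "true")
}

func main() {}
```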


cmd := &cobra.Command{
Use: "gluster-virtual-block-csi-driver",
Short: "GlusterFS Virtual Block CSI driver",
Contributor

There is no point in keeping the GlusterFS string here; Gluster fits better.

Contributor Author

Done

// ProvisionerConfig is the combined configuration of gluster cli vol create request and CSI driver specific input
type ProvisionerConfig struct {
gdVolReq *api.VolCreateReq
//csiConf *CsiDrvParam
Contributor

It may be more apt to have a block volume request param.

Contributor Author

Didn't get it, could you elaborate?

Member

Please remove this structure; it is not needed at all.

Contributor Author

Done

@humblec
Contributor

humblec commented Jan 21, 2019

@kotreshhr @poornimag do we have any design doc we can point to for the block driver? If yes, please put a pointer here in this PR for reference.

"os"

gvb "github.com/gluster/gluster-csi-driver/pkg/gluster-virtblock"
"github.com/gluster/gluster-csi-driver/pkg/gluster-virtblock/config"
Contributor

I feel either virtualblock or vb suits better than virtblock.

Contributor

Also, considering the glusterfs and gluster vb drivers use the same config params of a GD2 cluster, config could be an outside library for both to consume once it's imported.

Contributor Author

I will do this as part of a code refactor patch.

- name: CSI_ENDPOINT
value: unix://plugin/csi.sock
- name: REST_URL
value: http://glusterd2-client.gcs:24007
Contributor

@humblec humblec Jan 21, 2019

What's this URL all about? Is it a constant value for any GD2 cluster?

Contributor Author

It is just like the URL in the glusterfs-csi deployment file [1]. If it's deployed manually, this has to be changed, which will be mentioned in the README. If the deployment is part of gcs, gcs will take care of updating this.

[1]

value: http://192.168.121.182:24007

if !found {
source := gs + ":" + volume
hostPath = path.Join(ns.Config.MntPathPrefix, volume)
err = utils.MountVolume(source, hostPath, "glusterfs", nil)
Contributor

What are the permissions of this volume? How do we make sure the ACL is properly configured between different app pod mounts?

Contributor Author

Right now, it's mounted with default options, and with that the host volume mount is accessible to different pods, as per testing.

if err != nil {
return nil, err
}
ns.Config.Mounts[volume] = hostPath
Contributor

How do we ensure the reference counts? Also, how do we make sure no (stale) supervol mount exists when the reference count of block volumes on it is "0"?

Contributor Author

The logic of creating multiple block hosting volumes is in gd2. If a block volume can't be created in the first one, a second one is created. The volume context returns the correct block hosting volume, and that is being used. As for the actual deletion of the block volume context, @poornimag or @oshankkumar would better answer that.

Member

What humble meant was: if you delete all block volumes, why would you want to keep the mount of the BHV on the node plugin?

@humblec if you have multiple BHVs (maybe we will end up with stale BHV mounts), that can be fixed later.


Yeah, it's not ref counted yet. There will be stale host volume mounts if all the blocks are deleted. It can be sent as an incremental patch.
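For reference, a hedged sketch of what that incremental patch could look like (hypothetical type and method names): reference-count each block-hosting-volume (BHV) mount and unmount it once the last block volume on it is gone.

```
package utils

import "sync"

// bhvMounts is a hypothetical ref-counted registry of block-hosting-volume
// mounts; a sketch of the follow-up discussed above, not code from this PR.
type bhvMounts struct {
	mu    sync.Mutex
	refs  map[string]int    // BHV name -> number of live block volumes on it
	paths map[string]string // BHV name -> host mount path
}

// acquire records one more block volume on the BHV and reports whether the
// caller must perform the initial mount.
func (m *bhvMounts) acquire(bhv, hostPath string) (needMount bool) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.refs[bhv]++
	if m.refs[bhv] == 1 {
		m.paths[bhv] = hostPath
		return true
	}
	return false
}

// release drops one reference and reports whether the BHV mount is now
// unused and can be unmounted, avoiding the stale mounts discussed above.
func (m *bhvMounts) release(bhv string) (unmount bool) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.refs[bhv]--
	if m.refs[bhv] <= 0 {
		delete(m.refs, bhv)
		delete(m.paths, bhv)
		return true
	}
	return false
}
```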

@humblec
Contributor

humblec commented Jan 21, 2019

@kotreshhr @poornimag we also need a README for sure :)

@kotreshhr kotreshhr force-pushed the loopback-csi-driver branch 3 times, most recently from cbfdedc to 7179dc4 Compare January 24, 2019 06:41
"pkg/util/feature"
"pkg/util/feature",
]
pruneopts = "NUT"
Member

@kotreshhr which dep version are you using?

Contributor Author

[root@kotresh gluster-csi-driver $]dep version
dep:
version : v0.5.0
build date : 2018-07-26
git hash : 224a564
go version : go1.10.3
go compiler : gc
platform : linux/amd64
features : ImportDuringSolve=false

README.md Outdated
### 1. RW0 Volume Claim (Gluster Virtual Block CSI driver)

```
[root@localhost]#kubectl create -f csi-deployment.yaml
Member

In which directory is csi-deployment.yaml present? Please update.

Contributor Author

Done

README.md Outdated
metadata:
name: glustervirtblock-csi
annotations:
storageclass.kubernetes.io/is-default-class: "true"
Member

Do we need to make this storage class the default one?

@humblec @JohnStrunk ^^

Member

I think we can't have two default storage classes in the same cluster; please verify this.

Contributor Author

Done

README.md Outdated
local-storage kubernetes.io/no-provisioner 29h
```

Unset glusterfs-csi as default CSI driver if it's default
Member

Ah, this is not required; please make the gluster-block storage class a non-default one.

Contributor Author

Done

name: glusterblock-csi-pv
annotations:
storageClassName: glustervirtblock-csi
spec:
Member

Instead of storageClassName in an annotation, use spec.storageClassName; check https://github.com/gluster/gluster-csi-driver/pull/154/files#diff-04c6e90faac2675aa89e2176d2eec7d8R119

Contributor Author

Done

requests:
storage: 100Mi

[root@localhost]# kubectl create -f pvc.yaml
Member

cat and create should be in separate blocks.

Contributor Author

Done

README.md Outdated
spec:
containers:
- name: smallfile
image: quay.io/ekuric/smallfile:latest
Member

What does this container image do? Can't we use the existing redis image?

Contributor Author

Done

README.md Outdated
persistentVolumeClaim:
claimName: glusterblock-csi-pv

[root@localhost cluster]#kubectl create -f app.yaml
Member

Please keep [root@localhost] consistent across the README.

Contributor Author

Done

README.md Outdated
NAME READY STATUS RESTARTS AGE
gluster-0 1/1 Running 0 38s

[root@localhost]# kubectl describe pods/gluster-0
Member

Instead of describe, just use kubectl get po; if the pod is in Running state then it's fine.

Contributor Author

Done

@kotreshhr kotreshhr force-pushed the loopback-csi-driver branch from 7179dc4 to 3712dda Compare January 24, 2019 10:48
This patch intends to add a gluster-virtblock CSI driver which
exports a block device provided by Gluster. The current
implementation expects a mount point as a parameter, under
which the block devices will be created.

Co-Authored-by: Poornima G <pgurusid@redhat.com>
Co-Authored-by: Kotresh HR <khiremat@redhat.com>
Signed-off-by: Kotresh HR <khiremat@redhat.com>
@kotreshhr kotreshhr force-pushed the loopback-csi-driver branch from 3712dda to 0e1f281 Compare January 25, 2019 06:45
Member

@amarts amarts left a comment

LGTM. I personally think we should go ahead and get this moving, and make further improvements as follow-up PRs.

Member

@Madhu-1 Madhu-1 left a comment

LGTM, need to do code cleanup once this PR is merged.

@ghost ghost assigned Madhu-1 Jan 28, 2019
Member

@JohnStrunk JohnStrunk left a comment

I'm ok with this. There will need to be a follow-up to clean up the build process.

@JohnStrunk JohnStrunk merged commit acec1ee into gluster:master Jan 28, 2019
@ghost ghost removed the in progress label Jan 28, 2019