The current RDMA implementation of GiantVM (linux/arch/x86/kvm/krdma.c) uses the generic RDMA verb wrappers that the Linux kernel provides.
For example, the krdma_send function is implemented like this:
int krdma_send(struct krdma_cb *cb, const char *buffer, size_t length,
               unsigned long flag, const tx_add_t *tx_add)
{
    struct ib_send_wr *bad_wr;

    mutex_lock(&cb->slock);
    ...
    /* SGE covers the payload plus the part of tx_add that does not fit into imm_data */
    cb->send_trans_buf[slot].send_sge.length = length + (sizeof(tx_add_t) - sizeof(imm_t));
    cb->send_trans_buf[slot].sq_wr.wr_id = tx_add->txid;
    /* the first sizeof(imm_t) bytes of tx_add travel as 32-bit immediate data */
    cb->send_trans_buf[slot].sq_wr.ex.imm_data = htonl(*(const uint32_t*)tx_add);
    /* the remaining tx_add bytes are appended after the payload in the send buffer */
    memcpy(cb->send_trans_buf[slot].send_buf + length, (((const char *)tx_add) + sizeof(imm_t)),
           sizeof(tx_add_t) - sizeof(imm_t));
    memcpy(cb->send_trans_buf[slot].send_buf, buffer, length);
    /* hand the work request to whatever RDMA provider is bound to this QP */
    ret = ib_post_send(cb->qp, &cb->send_trans_buf[slot].sq_wr, &bad_wr);
    ...
In this code, ib_post_send is defined in linux/include/rdma/ib_verbs.h, and the generic kernel RDMA stack appears to support Soft-RoCE. More specifically, these kernel verbs do not care which device or device driver is actually installed; they are thin wrappers that dispatch to whatever provider is registered.
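For reference, ib_post_send itself is only an inline dispatcher. In the 4.x-era tree GiantVM builds on it looks roughly like the following (a paraphrased sketch of the ib_verbs.h definition; the exact form varies slightly across kernel versions):

/* Paraphrased sketch of the pre-5.0 definition in include/rdma/ib_verbs.h:
 * the verb simply forwards the work request to whichever provider
 * (mlx4/mlx5, rdma_rxe for Soft-RoCE, ...) registered this ib_device. */
static inline int ib_post_send(struct ib_qp *qp,
                               struct ib_send_wr *send_wr,
                               struct ib_send_wr **bad_send_wr)
{
    return qp->device->post_send(qp, send_wr, bad_send_wr);
}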

GiantVM appears to use the ib_core stack, so in theory there should be no problem running GiantVM on Soft-RoCE.
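One way to check this assumption before running GiantVM itself is a throwaway kernel module that registers an ib_client with ib_core and logs every RDMA device it is offered; once rdma_rxe is configured, the rxe device should appear to kernel clients exactly like a hardware HCA would. This is only a minimal sketch assuming a 4.x-era kernel (the module and message names are made up for illustration):

#include <linux/module.h>
#include <rdma/ib_verbs.h>

/* Called by ib_core for every registered RDMA device, including rxe ones. */
static void softroce_probe_add(struct ib_device *device)
{
    pr_info("softroce-probe: ib_core exposes device %s\n", device->name);
}

static void softroce_probe_remove(struct ib_device *device, void *client_data)
{
    pr_info("softroce-probe: device %s going away\n", device->name);
}

static struct ib_client softroce_probe_client = {
    .name   = "softroce_probe",
    .add    = softroce_probe_add,
    .remove = softroce_probe_remove,
};

static int __init softroce_probe_init(void)
{
    return ib_register_client(&softroce_probe_client);
}

static void __exit softroce_probe_exit(void)
{
    ib_unregister_client(&softroce_probe_client);
}

module_init(softroce_probe_init);
module_exit(softroce_probe_exit);
MODULE_LICENSE("GPL");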
So here is my test plan:
- Create two normal QEMU instances.
- Connect them over Ethernet and install the GiantVM kernel in each.
- Inside the QEMU instances, run GiantVM's QEMU.
- If Soft-RoCE support is complete, this should work without any problem.
- If any problem occurs, I will report it in this issue.
As for other RDMA protocols such as iWARP: they depend on TCP/IP, so they do not fit our model; I think those protocols are aimed at userspace rather than the kernel. The SoftiWARP report with Chelsio 100GbE NICs (see references) shows remote R/W latency close to local memory, but those experiments used 100 Gbps cards, so I doubt the same holds on 10 Gbps or 1 Gbps Ethernet: just serializing a 4 KiB page takes roughly 0.3 µs at 100 Gbps but about 3 µs at 10 Gbps and 33 µs at 1 Gbps, before any protocol overhead.

But testing GiantVM on other RDMA protocols sounds fun and interesting, so maybe that can be our next step: benchmarking GiantVM across various RDMA protocols.
References:
- HowTo Configure Soft-RoCE
- Configuring Soft-RoCE
- Implementing RDMA on Linux
- SoftiWARP Performance with Chelsio 100GbE
- RDMA 101 - Building a virtual setup
  - It uses Soft-iWARP, but many of the steps seem to overlap with Soft-RoCE.