Oxbow File System adopts a coordinated architecture that spans multiple components across user, kernel, and device spaces. The file system is presented at the 20th USENIX Symposium on Operating Systems Design and Implementation (OSDI 2026). Please find the paper here: Oxbow: A Coordinated Architecture for Multi-component File Systems.
Oxbow is designed to run with a computational storage device that contains an SoC -- a general-purpose CPU and dedicated memory. Since only a few experimental devices exist at the moment, we emulate computational storage using a DPU (a SmartNIC).
Additionally, you can try running Oxbow in Host-journaling mode, which
executes the device-side component on the host. In this mode, you can run
Oxbow without any special hardware, in a virtual machine with two NVMe SSDs --
real, or emulated with two files.
Please check Host-journaling for the Host-journaling setup.
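For reference, a VM with two file-backed emulated NVMe SSDs can be created with standard QEMU options; a minimal sketch (image names and sizes are arbitrary; replace `...` with your usual VM options):

```shell
# Create two backing files and attach them as emulated NVMe controllers.
qemu-img create -f raw nvme0.img 16G
qemu-img create -f raw nvme1.img 16G
qemu-system-x86_64 ... \
  -drive file=nvme0.img,if=none,id=nvm0 -device nvme,serial=deadbeef0,drive=nvm0 \
  -drive file=nvme1.img,if=none,id=nvm1 -device nvme,serial=deadbeef1,drive=nvm1
```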
Notice to OSDI 2026 AE reviewers: See the Artifact Evaluation document.
Some components of Oxbow have different names from those described in the paper. Check the names used in the source code below:
- secure_daemon: described as H-Server in the paper.
- devfs: described as D-Server in the paper.
- libfs: described as oxLib in the paper.
Software:
- Ubuntu 23.04
- Linux kernel v6.2.10 (Oxbow kernel)
Hardware:
- DPU (Data Processing Unit, BlueField-2) for computational storage emulation
  - DPU OS: Ubuntu 22.04.3 LTS
  - DPU kernel: 5.15.0-1021-bluefield
- SR-IOV and namespace sharing support on the SSD
We assume that the RDMA connection between the host and the DPU has been configured correctly with the appropriate RDMA drivers.
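A quick way to confirm that RDMA devices are visible (a sketch; `check_rdma` is a hypothetical helper, and `ibv_devinfo` comes from the standard libibverbs utilities):

```shell
# Print the RDMA HCAs and their port states, if any are configured.
check_rdma() {
    if command -v ibv_devinfo >/dev/null 2>&1; then
        ibv_devinfo 2>/dev/null | grep -E 'hca_id|state' || echo "no RDMA devices found"
    else
        echo "ibv_devinfo not installed"
    fi
}
check_rdma
```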
```
git clone https://github.com/xlab-uiuc/oxbow.git
cd oxbow
git submodule update --recursive --init
scripts/device/submodule_init.sh
```

If you experience a hang during reboot with the "Loading RAM disk" message, do `make oldconfig` with your working config instead.
The Linux kernel source code is in the oxbow/linux-kernel directory.

```
make x86_64_defconfig
```

To build a kernel image for QEMU:

```
make kvm_guest.config  # If this fails (in an older kernel), do: make kvmconfig
```

Additionally, you can set/unset the following configurations via `make menuconfig` or by directly modifying the .config file.
Debug configurations might be needed:

```
CONFIG_KGDB=y
CONFIG_DEBUG_INFO=y
CONFIG_GDB_SCRIPTS=y
CONFIG_DEBUG_INFO_DWARF4=y
CONFIG_RANDOMIZE_BASE=n  # You can also disable it by adding the 'nokaslr' kernel parameter.
CONFIG_USERFAULTFD=y     # You can disable it. Not used in the recent version.
```

Set the Oxbow configurations as below:

```
CONFIG_OXBOW_ILLUFS=y
# CONFIG_OXBOW_IPC_MSG_RING is not set
CONFIG_OXBOW_SHM_GUP_CACHE=y
# CONFIG_OXBOW_SHM_GUP_COUNTER is not set
CONFIG_OXBOW_ILLUFS_TASK_WRITE_FAULT_CONTEXT=y
```

Build the kernel:

```
make -j
```

The following kernel parameters are required:
- intel_iommu=on: to set up NVMe-oF.
- memmap= or mem=: to reserve higher memory for the NVMe-oF offload buffer. Set a proper value considering your machine's resources. You also have to modify the script file scripts/host/nvmf-of/config_nvmf_server.sh accordingly.
- pci=realloc: to set up NVMe SR-IOV.
- nvme_core.multipath=N: to set up NVMe SR-IOV.
Add them to /etc/default/grub as in the example below and run sudo update-grub.

```
GRUB_CMDLINE_LINUX_DEFAULT="...<other params>... intel_iommu=on iommu=pt memmap=16G\\\$80G pci=realloc nvme_core.multipath=N"
```

Install the kernel and modules:

```
sudo make modules_install install
```

Restart the machine with the installed kernel.
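After rebooting, you can sanity-check that the required parameters took effect; a minimal sketch (`verify_cmdline` is a hypothetical helper; the parameter list mirrors the requirements above):

```shell
# Check /proc/cmdline for each required kernel parameter.
verify_cmdline() {
    local missing=0
    local p
    for p in intel_iommu=on pci=realloc nvme_core.multipath=N; do
        grep -q -- "$p" /proc/cmdline || { echo "missing: $p"; missing=1; }
    done
    if [ "$missing" -eq 0 ]; then
        echo "all required kernel parameters present"
    fi
}
verify_cmdline
```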
You can easily set up the machine using the uFSBench_tmux.sh or
uFSBench_tmux_host-journaling.sh scripts. These scripts set environment
variables and set up the machine for CSD
emulation -- hence, you can skip 5. Setting environment variables and
6. Emulating Smart SSD with Smart NIC.
In a virtual machine, use uFSBench_qemu.sh instead -- you don't need to set up
a smart device. The scripts are also useful for running benchmarks (see the
artifact-evaluation document).
Load environment variables to use Oxbow:

```
cd oxbow
source set_env.sh
```

Most of the scripts require the environment variables to be set properly. Check
the set_env.sh file and set the correct values based on your system.
In particular, check the following variables:

```
export NVME_PCIE_ADDR='0000:d8:00.0'      # SSD to be used by Oxbow.
export NVME_PCIE_ADDR_EXT4='0000:d8:00.0' # SSD to be used by Ext4.
```

The following variables are set automatically, but make sure that the correct names are assigned:

```
export NVME_DEV_NAME=
export NVME_DEV_NAME_EXT4=
```

You can skip this section if you run Oxbow in Host-journaling mode.
Use the script scripts/host/setup_machine_all.sh, which does the steps
in this section.
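If the automatic assignment fails, one way to resolve the kernel device name from a PCIe address via sysfs (a sketch; `pcie_to_nvme_name` is a hypothetical helper, not set_env.sh's exact logic):

```shell
# Map a PCIe address (e.g. 0000:d8:00.0) to its nvme device name (e.g. nvme0).
pcie_to_nvme_name() {
    local addr="$1" dir
    for dir in /sys/bus/pci/devices/"$addr"/nvme/nvme*; do
        if [ -e "$dir" ]; then
            basename "$dir"
            return 0
        fi
    done
    echo "no nvme device found at $addr" >&2
    return 1
}
# Usage:
# NVME_DEV_NAME=$(pcie_to_nvme_name "$NVME_PCIE_ADDR")
```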
The basic idea is to make the host and the device (SmartNIC) share an NVMe namespace. This is feasible if the hardware supports the following:
- SR-IOV support by the SSD
- Namespace sharing by the SSD
- NVMe-oF support by the SSD and the SmartNIC

We create a VF (Virtual Function, a.k.a. secondary controller) and attach both the PF (Physical Function, a.k.a. primary controller) and the VF to the same namespace. The host (Secure Daemon) accesses the namespace via the VF using the SPDK library, and the SmartNIC (DevFS) accesses it via the PF over an NVMe-oF connection.
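Whether the SSD exposes SR-IOV can be checked from sysfs before attempting the setup; a sketch (`check_sriov` is a hypothetical helper; the sysfs path is the standard Linux layout):

```shell
# Report how many VFs the PF can create, if SR-IOV is exposed.
# Namespace sharing can be checked separately with nvme-cli:
#   nvme id-ctrl /dev/nvmeX | grep -i cmic
check_sriov() {
    local addr="$1"
    local path="/sys/bus/pci/devices/$addr/sriov_totalvfs"
    if [ -r "$path" ]; then
        echo "SR-IOV VFs supported: $(cat "$path")"
    else
        echo "no SR-IOV capability exposed at $addr"
    fi
}
check_sriov 0000:d8:00.0   # PF address from this document's examples
```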
* The order of the configuration steps below is important. The first step
reloads the nvme driver, which resets all related configuration and can cause
the device to enter an abnormal state. Additionally, SPDK's setup.sh script
changes the driver from nvme to vfio-pci, which can change the mapping between
PCIe addresses and device names.

1. Set up the NVMe-oF server (host) using the PF and connect the client (SmartNIC).
2. Set up the VF.
3. Set up SPDK to use the VF.

Refer to Documentation/nvmf.md.
```
scripts/host/setup_sriov_vf.sh
```

Make sure that NVME_DEV_NAME has been parsed correctly in the script
scripts/host/nvme-sriov/setup_vf.sh, which is called by setup_sriov_vf.sh.
This script parses NVME_DEV_NAME assuming that your SSD is a PM1735. If you
are using another device, set the proper name in the script.
Refer to scripts/host/nvme-sriov/README.md for the details.
Set pcie_nvme_addr to the PCIe address of the created VF in
oxbow/secure_daemon/secure_daemon_conf.sh. For example, the first VF's address is
0000:d8:00.1 (where 0000:d8:00.0 is the PF's).

```
$ lspci
...
d8:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM173X
d8:00.1 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM173X
d8:00.2 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM173X
```

Run scripts/host/setup_spdk.sh to set up huge pages and bind a driver for the SPDK
library. The -r option resets the configuration.
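Afterwards you can confirm the huge-page reservation; a minimal check (`hugepage_status` is a hypothetical helper reading standard /proc/meminfo fields):

```shell
# Show the huge-page totals that SPDK will draw from.
hugepage_status() {
    grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo \
        || echo "no hugepage info in /proc/meminfo"
}
hugepage_status
```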
There are three user-level components.

Running on the host:
- Secure Daemon: a user-level daemon (a file server process).
- LibFS: a library for applications.

Running on the smart device:
- DevFS: an in-device daemon.
```
cd oxbow/libfs/lib
./install.sh
```

```
cd oxbow/libfs
./build.sh
# Or, to rebuild:
# ./build.sh re
```

(Currently, Oxbow supports only the lwext4 file system; sefs is not tested.)
Select a file system by changing the following:
- the filesystem variable in the oxbow/secure_daemon/meson.build file
- the filesystem variable in the oxbow/secure_daemon/secure_daemon_conf.sh file

Note that you also need to change the following when compiling DevFS:
- the filesystem variable in the oxbow/devfs/meson.build file
Then build it:

```
cd oxbow/secure_daemon
./build.sh
# Or, to rebuild:
# ./build.sh re
```

Refer to oxbow/devfs/README.md.
Set proper values in oxbow/secure_daemon/secure_daemon_conf.sh.
Note that you can create an untracked configuration file,
oxbow/secure_daemon/myconf.sh, that overrides the configurations in
secure_daemon_conf.sh.

An example of myconf.sh:

```
#!/bin/bash
# Configs in this file override configs in secure_daemon_conf.sh
export pcie_nvme_addr="0000:XX:00.1 0000:XX:00.2 0000:XX:00.3"
# export rpc_rdma_ip_addr="192.168.14.113" # host address for host journaling
export rpc_rdma_ip_addr="192.168.14.114" # device address for device journaling
```

Setting proper values for pcie_nvme_addr and rpc_rdma_ip_addr is required to
run Secure Daemon.
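The override mechanism can be pictured with a minimal sketch (an assumption about the sourcing order, not the repo's exact logic): source the base config first, then the optional untracked file, so its exports take precedence.

```shell
# Base default, standing in for a value set in secure_daemon_conf.sh.
export filesystem=lwext4
# User overrides win if myconf.sh exists in the current directory.
if [ -f ./myconf.sh ]; then
    . ./myconf.sh
fi
echo "filesystem=$filesystem"
```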
Refer to oxbow/devfs/README.md.
There is a file, oxbow/libfs/libfs_conf.sh, for the LibFS configuration, but you
don't need to modify it.
```
# Do mkfs with this script. (The script assumes the SR-IOV setup.)
scripts/host/mkfs.sh
```

Or, do mkfs manually: sudo build/sefs_mkfs /dev/nvmeXnY.
You need at least three terminals. Using tmux is recommended -- you can launch
the required session with uFSBench_tmux.sh or
uFSBench_tmux_host-journaling.sh. Refer to
deploy-on-testbed.md as well.

The order matters: start DevFS first, then Secure Daemon.
Refer to oxbow/devfs/README.md.
```
cd oxbow/secure_daemon
./run.sh
##############################
### Ready to use Oxbow FS. ###
##############################
## You can exit the daemon by pressing ctrl+c.
#
## Make sure to umount /oxbow after terminating Secure Daemon.
sudo umount /oxbow
```

Check the kernel log (optional):

```
$ sudo dmesg --follow
```

Run a test program:

```
cd oxbow/libfs
# Run a program (e.g. a benchmark)
./run.sh build/test/file_basic
```

Refer to Documentation/bench.md.
Refer to Documentation/fs-config.md.
NVMe device is not probed. SPDK's setup.sh script prompts as below:

```
0000:d9:00.0 (144d a80a): Active devices: data@nvme1n1, so not binding PCI dev
```

You have to wipe the partition table from the device; the wipefs command will do it:

```
sudo wipefs -a <device_name: ex) /dev/nvme1n0>
```

The device should not be displayed by blkid.
NVMe device is not attached. An example error message is shown below:

```
...
16:12:28 DEBUG ../src/common/storage_engine/se_nvme.c:61: Attaching to 0000:d8:00.0
[2023-08-21 16:12:28.591811] pci.c:1016:spdk_pci_device_claim: *ERROR*: could not open /var/tmp/spdk_pci_lock_0000:d8:00.0
[2023-08-21 16:12:28.591835] nvme_pcie.c: 888:nvme_pcie_ctrlr_construct: *ERROR*: could not claim device 0000:d8:00.0 (Permission denied)
[2023-08-21 16:12:28.591846] nvme.c: 677:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 0000:d8:00.0
EAL: Requested device 0000:d8:00.0 cannot be used
...
```

Use SPDK's setup.sh script to clean up the device:

```
cd lib/spdk
sudo scripts/setup.sh cleanup
```

For example, when you run the script scripts/host/setup_spdk.sh, you will see
the following messages for the correct PCIe address:
```
sudo PCI_ALLOWED=0000:00:XX.0 /home/yulistic/data/oxbow.code/oxbow/libfs/lib/spdk/scripts/setup.sh
0000:00:XX.0 (c0a9 5415): Active devices: data@nvme0n1, so not binding PCI dev
INFO: Requested 1024 hugepages but 4096 already allocated on node0
```

Unbind the device from the driver manually with the following command:
```
# As root:
echo 0000:00:XX.0 > /sys/bus/pci/drivers/nvme/unbind
# Or, using sudo:
sudo bash -c "echo 0000:00:XX.0 > /sys/bus/pci/drivers/nvme/unbind"
```
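To confirm the unbind took effect, you can inspect the device's sysfs driver link; a sketch (`driver_of` is a hypothetical helper; substitute your real PCIe address for the placeholder):

```shell
# Print the driver currently bound to a PCIe device, or "none".
driver_of() {
    local addr="$1"
    if [ -e "/sys/bus/pci/devices/$addr/driver" ]; then
        basename "$(readlink "/sys/bus/pci/devices/$addr/driver")"
    else
        echo "none"
    fi
}
driver_of 0000:00:XX.0   # substitute your real address
```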