51 commits
38c6eb3
crypto: hisilicon - qm updates BAR configuration
Oct 30, 2025
0c7d382
hisi_acc_vfio_pci: adapt to new migration configuration
Oct 30, 2025
449e051
vfio/nvgrace-gpu: fix grammatical error
Aug 14, 2025
897cefa
vfio: Provide a get_region_info op
jgunthorpe Nov 7, 2025
6b97c1b
vfio/hisi: Convert to the get_region_info op
jgunthorpe Nov 7, 2025
fad0d0d
vfio/virtio: Convert to the get_region_info op
jgunthorpe Nov 7, 2025
7026227
vfio/nvgrace: Convert to the get_region_info op
jgunthorpe Nov 7, 2025
e54b8e0
vfio/pci: Fill in the missing get_region_info ops
jgunthorpe Nov 7, 2025
4df2081
vfio/mtty: Provide a get_region_info op
jgunthorpe Nov 7, 2025
0fbfd73
vfio/mdpy: Provide a get_region_info op
jgunthorpe Nov 7, 2025
554dca9
vfio/mbochs: Provide a get_region_info op
jgunthorpe Nov 7, 2025
073f13c
vfio/platform: Provide a get_region_info op
jgunthorpe Nov 7, 2025
8ba94bf
vfio/fsl: Provide a get_region_info op
jgunthorpe Nov 7, 2025
619333d
vfio/cdx: Provide a get_region_info op
jgunthorpe Nov 7, 2025
76b5171
vfio/ccw: Provide a get_region_info op
jgunthorpe Nov 7, 2025
6c250ce
vfio/gvt: Provide a get_region_info op
jgunthorpe Nov 7, 2025
e7da106
vfio: Require drivers to implement get_region_info
jgunthorpe Nov 7, 2025
7dd77b8
vfio: Add get_region_info_caps op
jgunthorpe Nov 7, 2025
29e1217
vfio/mbochs: Convert mbochs to use vfio_info_add_capability()
jgunthorpe Nov 7, 2025
0282af0
vfio/gvt: Convert to get_region_info_caps
jgunthorpe Nov 7, 2025
bc1c993
vfio/ccw: Convert to get_region_info_caps
jgunthorpe Nov 7, 2025
2bf5a2c
vfio/pci: Convert all PCI drivers to get_region_info_caps
jgunthorpe Nov 7, 2025
c0ad388
vfio/platform: Convert to get_region_info_caps
jgunthorpe Nov 7, 2025
2108575
vfio: Move the remaining drivers to get_region_info_caps
jgunthorpe Nov 7, 2025
54d50bb
vfio: Remove the get_region_info op
jgunthorpe Nov 7, 2025
fd317b8
NVIDIA: VR: SAUCE: cxl: Add cxl_get_hdm_info() for HDM decoder metadata
mmhonap Apr 1, 2026
e02c1b7
NVIDIA: VR: SAUCE: cxl: Declare cxl_find_regblock and cxl_probe_compo…
mmhonap Apr 1, 2026
199d5d2
NVIDIA: VR: SAUCE: cxl: Move component/HDM register defines to uapi/c…
mmhonap Apr 1, 2026
d0fde98
NVIDIA: VR: SAUCE: cxl: Split cxl_await_range_active() from media-rea…
mmhonap Apr 1, 2026
d314145
NVIDIA: VR: SAUCE: cxl: Record BIR and BAR offset in cxl_register_map
mmhonap Apr 1, 2026
05c1da9
NVIDIA: VR: SAUCE: vfio: UAPI for CXL-capable PCI device assignment
mmhonap Apr 1, 2026
de3e1a6
NVIDIA: VR: SAUCE: vfio/pci: Add CXL state to vfio_pci_core_device
mmhonap Apr 1, 2026
cb87876
NVIDIA: VR: SAUCE: vfio/pci: Add CONFIG_VFIO_CXL_CORE and stub CXL hooks
mmhonap Apr 1, 2026
84fbfbc
NVIDIA: VR: SAUCE: vfio/cxl: Detect CXL DVSEC and probe HDM block
mmhonap Apr 1, 2026
0fbd7b2
NVIDIA: VR: SAUCE: vfio/pci: Export config access helpers
mmhonap Apr 1, 2026
ad39798
NVIDIA: VR: SAUCE: vfio/cxl: Introduce HDM decoder register emulation…
mmhonap Apr 1, 2026
d64c61c
NVIDIA: VR: SAUCE: vfio/cxl: Wait for HDM ranges and create memdev
mmhonap Apr 1, 2026
fb580ac
NVIDIA: VR: SAUCE: vfio/cxl: CXL region management support
mmhonap Apr 1, 2026
05b9195
NVIDIA: VR: SAUCE: vfio/cxl: DPA VFIO region with demand fault mmap a…
mmhonap Apr 1, 2026
1447b99
NVIDIA: VR: SAUCE: vfio/cxl: Virtualize CXL DVSEC config writes
mmhonap Apr 1, 2026
24dd667
NVIDIA: VR: SAUCE: vfio/cxl: Register regions with VFIO layer
mmhonap Apr 1, 2026
534faac
NVIDIA: VR: SAUCE: vfio/pci: Advertise CXL cap and sparse component B…
mmhonap Apr 1, 2026
5bc0b3e
NVIDIA: VR: SAUCE: vfio/cxl: Provide opt-out for CXL feature
mmhonap Apr 1, 2026
646f12a
NVIDIA: VR: SAUCE: docs: vfio-pci: Document CXL Type-2 device passthr…
mmhonap Apr 1, 2026
d535328
NVIDIA: VR: SAUCE: cxl: Export the CXL reset helpers for VFIO users
mmhonap Apr 30, 2026
e5183b4
NVIDIA: VR: SAUCE: vfio/pci: Wire CXL DPA reset handling
mmhonap Apr 30, 2026
8c92d19
NVIDIA: VR: SAUCE: vfio/cxl: Ensure PCI Memory Space is enabled befor…
mmhonap Apr 29, 2026
15ef3e9
NVIDIA: VR: SAUCE: vfio/cxl: preserve HDM decoder base addresses acro…
mmhonap Apr 29, 2026
37fca85
NVIDIA: VR: SAUCE: vfio/cxl: virtualize DVSEC STATUS2 register in vco…
mmhonap Apr 29, 2026
e8c8331
NVIDIA: VR: SAUCE: vfio/cxl: Implement vfio_cxl_reset()
mmhonap Apr 30, 2026
aef7e33
NVIDIA: VR: SAUCE: config: Enable CONFIG_VFIO_CXL_CORE for CXL Type-2…
JiandiAnNVIDIA May 5, 2026
1 change: 1 addition & 0 deletions Documentation/driver-api/index.rst
@@ -47,6 +47,7 @@ of interest to most developers working on device drivers.
   vfio-mediated-device
   vfio
   vfio-pci-device-specific-driver-acceptance
   vfio-pci-cxl

Bus-level documentation
=======================
382 changes: 382 additions & 0 deletions Documentation/driver-api/vfio-pci-cxl.rst

Large diffs are not rendered by default.

2 changes: 2 additions & 0 deletions debian.nvidia-6.17/config/annotations
@@ -255,6 +255,8 @@ CONFIG_UBUNTU_ODM_DRIVERS note<'Disable all Ubuntu ODM dri
CONFIG_ULTRASOC_SMB                             policy<{'arm64': 'n'}>
CONFIG_ULTRASOC_SMB                             note<'Required for Grace enablement'>

CONFIG_VFIO_CXL_CORE                            policy<{'amd64': 'y', 'arm64': 'y'}>
CONFIG_VFIO_CXL_CORE                            note<'Enable VFIO CXL core for CXL Type-2 device passthrough support'>

# ---- Annotations without notes ----

27 changes: 27 additions & 0 deletions drivers/crypto/hisilicon/qm.c
@@ -3005,11 +3005,36 @@ static void qm_put_pci_res(struct hisi_qm *qm)
	pci_release_mem_regions(pdev);
}

static void hisi_mig_region_clear(struct hisi_qm *qm)
{
	u32 val;

	/* Clear the migration region selection of the PF */
	if (qm->fun_type == QM_HW_PF && qm->ver > QM_HW_V3) {
		val = readl(qm->io_base + QM_MIG_REGION_SEL);
		val &= ~QM_MIG_REGION_EN;
		writel(val, qm->io_base + QM_MIG_REGION_SEL);
	}
}

static void hisi_mig_region_enable(struct hisi_qm *qm)
{
	u32 val;

	/* Select the migration region of the PF */
	if (qm->fun_type == QM_HW_PF && qm->ver > QM_HW_V3) {
		val = readl(qm->io_base + QM_MIG_REGION_SEL);
		val |= QM_MIG_REGION_EN;
		writel(val, qm->io_base + QM_MIG_REGION_SEL);
	}
}

static void hisi_qm_pci_uninit(struct hisi_qm *qm)
{
	struct pci_dev *pdev = qm->pdev;

	pci_free_irq_vectors(pdev);
	hisi_mig_region_clear(qm);
	qm_put_pci_res(qm);
	pci_disable_device(pdev);
}
@@ -5696,6 +5721,7 @@ int hisi_qm_init(struct hisi_qm *qm)
		goto err_free_qm_memory;

	qm_cmd_init(qm);
	hisi_mig_region_enable(qm);

	return 0;
@@ -5834,6 +5860,7 @@ static int qm_rebuild_for_resume(struct hisi_qm *qm)
	}

	qm_cmd_init(qm);
	hisi_mig_region_enable(qm);
	hisi_qm_dev_err_init(qm);
	/* Set the doorbell timeout to QM_DB_TIMEOUT_CFG ns. */
	writel(QM_DB_TIMEOUT_SET, qm->io_base + QM_DB_TIMEOUT_CFG);
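The `hisi_mig_region_enable()`/`hisi_mig_region_clear()` pair above is a plain read-modify-write of one enable bit in a memory-mapped register. A minimal userspace sketch of the pattern follows; the register backing store, bit position, and `fake_readl()`/`fake_writel()` helpers are stand-ins (the real driver uses `readl()`/`writel()` on `qm->io_base` with the `QM_MIG_REGION_SEL`/`QM_MIG_REGION_EN` definitions from qm.c):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for QM_MIG_REGION_EN; the bit position is illustrative,
 * not taken from the hardware spec. */
#define MIG_REGION_EN (1u << 0)

static uint32_t mig_region_sel; /* fake MMIO register backing store */

static uint32_t fake_readl(void) { return mig_region_sel; }
static void fake_writel(uint32_t v) { mig_region_sel = v; }

/* Mirrors hisi_mig_region_enable(): set only the enable bit */
static void mig_region_enable(void)
{
	uint32_t val = fake_readl();

	val |= MIG_REGION_EN;
	fake_writel(val);
}

/* Mirrors hisi_mig_region_clear(): clear only the enable bit,
 * leaving every other bit in the register untouched */
static void mig_region_clear(void)
{
	uint32_t val = fake_readl();

	val &= ~MIG_REGION_EN;
	fake_writel(val);
}
```

The read-modify-write matters because the selection register may carry other live bits; a blind write of the bare flag value would clobber them.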
84 changes: 75 additions & 9 deletions drivers/cxl/core/pci.c
@@ -147,16 +147,24 @@ static int cxl_dvsec_mem_range_active(struct cxl_dev_state *cxlds, int id)
	return 0;
}

/*
 * Wait up to @media_ready_timeout for the device to report memory
 * active.
/**
 * cxl_await_range_active - Wait for all HDM DVSEC memory ranges to be active
 * @cxlds: CXL device state (DVSEC and HDM count must be valid)
 *
 * For each HDM decoder range reported in the CXL DVSEC capability, waits for
 * the range to report MEM INFO VALID (up to 1s per range), then MEM ACTIVE
 * (up to media_ready_timeout seconds per range, default 60s). Used by
 * cxl_await_media_ready() and by callers that only need range readiness
 * without checking the memory device status register.
 *
 * Return: 0 if all ranges become valid and active, -ETIMEDOUT if a timeout
 * occurs, or a negative errno from config read on failure.
 */
int cxl_await_media_ready(struct cxl_dev_state *cxlds)
int cxl_await_range_active(struct cxl_dev_state *cxlds)
{
	struct pci_dev *pdev = to_pci_dev(cxlds->dev);
	int d = cxlds->cxl_dvsec;
	int rc, i, hdm_count;
	u64 md_status;
	u16 cap;

	rc = pci_read_config_word(pdev,
@@ -177,6 +185,23 @@ int cxl_await_media_ready(struct cxl_dev_state *cxlds)
			return rc;
	}

	return 0;
}
EXPORT_SYMBOL_NS_GPL(cxl_await_range_active, "CXL");

/*
 * Wait up to @media_ready_timeout for the device to report memory
 * active.
 */
int cxl_await_media_ready(struct cxl_dev_state *cxlds)
{
	u64 md_status;
	int rc;

	rc = cxl_await_range_active(cxlds);
	if (rc)
		return rc;

	md_status = readq(cxlds->regs.memdev + CXLMDEV_STATUS_OFFSET);
	if (!CXLMDEV_READY(md_status))
		return -EIO;
@@ -454,6 +479,35 @@ int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm,
}
EXPORT_SYMBOL_NS_GPL(cxl_hdm_decode_init, "CXL");

/**
 * cxl_get_hdm_info - Get HDM decoder register block location and count
 * @cxlds: CXL device state (must have component regs enumerated via
 *	   cxl_probe_component_regs())
 * @count: number of HDM decoders in the block (from HDM Capability bits [3:0])
 * @offset: byte offset of HDM decoder block within the component register BAR
 * @size: size in bytes of the HDM decoder block
 *
 * Return: 0 on success, -EINVAL if any output pointer is NULL, -ENODEV if
 * the HDM decoder block is not present.
 */
int cxl_get_hdm_info(struct cxl_dev_state *cxlds, u8 *count,
		     resource_size_t *offset, resource_size_t *size)
{
	struct cxl_reg_map *hdm = &cxlds->reg_map.component_map.hdm_decoder;

	if (WARN_ON(!count || !offset || !size))
		return -EINVAL;

	if (!hdm->valid)
		return -ENODEV;

	*count = hdm->count;
	*offset = hdm->offset;
	*size = hdm->size;

	return 0;
}
EXPORT_SYMBOL_NS_GPL(cxl_get_hdm_info, "CXL");

#define CXL_DOE_TABLE_ACCESS_REQ_CODE		0x000000ff
#define CXL_DOE_TABLE_ACCESS_REQ_CODE_READ	0
#define CXL_DOE_TABLE_ACCESS_TABLE_TYPE		0x0000ff00
@@ -1183,7 +1237,7 @@ static void cxl_pci_functions_reset_done(struct cxl_reset_context *ctx)
/*
 * CXL device reset execution
 */
static int cxl_dev_reset(struct pci_dev *pdev, int dvsec)
int cxl_dev_reset(struct pci_dev *pdev, int dvsec, bool mem_clr_en)
{
	static const u32 reset_timeout_ms[] = { 10, 100, 1000, 10000, 100000 };
	u16 cap, ctrl2, status2;
@@ -1253,7 +1307,17 @@ static int cxl_dev_reset(struct pci_dev *pdev, int dvsec)
	if (rc)
		return rc;

	ctrl2 |= PCI_DVSEC_CXL_RST_MEM_CLR_EN;
	/*
	 * Explicitly set or clear RST_MEM_CLR_EN rather than only
	 * setting it. A previous reset may have left the bit set in
	 * hardware; if mem_clr_en is false we must clear it so that a
	 * stale bit does not cause an unwanted memory-clearing reset.
	 */
	if (mem_clr_en)
		ctrl2 |= PCI_DVSEC_CXL_RST_MEM_CLR_EN;
	else
		ctrl2 &= ~PCI_DVSEC_CXL_RST_MEM_CLR_EN;

	rc = pci_write_config_word(pdev, dvsec + PCI_DVSEC_CXL_CTRL2,
				   ctrl2);
	if (rc)
@@ -1302,6 +1366,7 @@ static int cxl_dev_reset(struct pci_dev *pdev, int dvsec)

	return 0;
}
EXPORT_SYMBOL_NS_GPL(cxl_dev_reset, "CXL");

static int match_memdev_by_parent(struct device *dev, const void *parent)
{
@@ -1341,7 +1406,7 @@ static int cxl_do_reset(struct pci_dev *pdev)
	pci_dev_save_and_disable(pdev);
	cxl_pci_functions_reset_prepare(&ctx);

	rc = cxl_dev_reset(pdev, dvsec);
	rc = cxl_dev_reset(pdev, dvsec, true);

	cxl_pci_functions_reset_done(&ctx);
@@ -1370,7 +1435,7 @@ static int cxl_do_reset(struct pci_dev *pdev)
 * devices under bus core serialization.
 */

static bool pci_cxl_reset_capable(struct pci_dev *pdev)
bool pci_cxl_reset_capable(struct pci_dev *pdev)
{
	int dvsec;
	u16 cap;
@@ -1389,6 +1454,7 @@ static bool pci_cxl_reset_capable(struct pci_dev *pdev)

	return !!(cap & PCI_DVSEC_CXL_RST_CAPABLE);
}
EXPORT_SYMBOL_NS_GPL(pci_cxl_reset_capable, "CXL");

static ssize_t cxl_reset_store(struct device *dev,
			       struct device_attribute *attr,
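The `mem_clr_en` handling in `cxl_dev_reset()` above boils down to writing one control bit to a definite state in both directions, so a stale `PCI_DVSEC_CXL_RST_MEM_CLR_EN` left over from an earlier reset can never leak into a later one. A small self-contained sketch of just that logic (the bit position here is a placeholder, not the value from the CXL DVSEC register layout):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Placeholder bit; the real mask is PCI_DVSEC_CXL_RST_MEM_CLR_EN */
#define RST_MEM_CLR_EN (1u << 2)

/* Mirrors the CTRL2 update in cxl_dev_reset(): the bit is explicitly
 * set or cleared, never just OR-ed in, so other CTRL2 bits and any
 * stale state are handled correctly. */
static uint16_t apply_mem_clr_en(uint16_t ctrl2, bool mem_clr_en)
{
	if (mem_clr_en)
		ctrl2 |= RST_MEM_CLR_EN;
	else
		ctrl2 &= ~RST_MEM_CLR_EN;

	return ctrl2;
}
```

With the old `ctrl2 |= PCI_DVSEC_CXL_RST_MEM_CLR_EN;` code, a register that already had the bit set would stay in memory-clearing mode even when the caller asked for a plain reset; `apply_mem_clr_en(stale_ctrl2, false)` returns it cleared while preserving unrelated bits.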
30 changes: 30 additions & 0 deletions drivers/cxl/core/regs.c
@@ -85,6 +85,7 @@ void cxl_probe_component_regs(struct device *dev, void __iomem *base,
		decoder_cnt = cxl_hdm_decoder_count(hdr);
		length = 0x20 * decoder_cnt + 0x10;
		rmap = &map->hdm_decoder;
		rmap->count = decoder_cnt;
		break;
	}
	case CXL_CM_CAP_CAP_ID_RAS:
@@ -287,9 +288,37 @@ static bool cxl_decode_regblock(struct pci_dev *pdev, u32 reg_lo, u32 reg_hi,
	map->reg_type = reg_type;
	map->resource = pci_resource_start(pdev, bar) + offset;
	map->max_size = pci_resource_len(pdev, bar) - offset;
	map->bar_index = bar;
	map->bar_offset = offset;
	return true;
}

/**
 * cxl_regblock_get_bar_info() - Get BAR index and offset for a BAR-backed
 *				 regblock
 * @map: Register map from cxl_find_regblock() or cxl_find_regblock_instance()
 * @bar_index: Output BAR index (0-5). Optional, may be NULL.
 * @bar_offset: Output offset within the BAR. Optional, may be NULL.
 *
 * When the register block was found via the Register Locator DVSEC and
 * lives in a PCI BAR (BIR 0-5), this returns the BAR index and the offset
 * within that BAR.
 *
 * Return: 0 if the regblock is BAR-backed (bar_index <= 5), -EINVAL otherwise.
 */
int cxl_regblock_get_bar_info(const struct cxl_register_map *map, u8 *bar_index,
			      resource_size_t *bar_offset)
{
	if (!map || map->bar_index == 0xff)
		return -EINVAL;
	if (bar_index)
		*bar_index = map->bar_index;
	if (bar_offset)
		*bar_offset = map->bar_offset;
	return 0;
}
EXPORT_SYMBOL_NS_GPL(cxl_regblock_get_bar_info, "CXL");

/*
 * __cxl_find_regblock_instance() - Locate a register block or count instances by type / index
 * Use CXL_INSTANCES_COUNT for @index if counting instances.
@@ -308,6 +337,7 @@ static int __cxl_find_regblock_instance(struct pci_dev *pdev, enum cxl_regloc_ty

	*map = (struct cxl_register_map) {
		.host = &pdev->dev,
		.bar_index = 0xFF,
		.resource = CXL_RESOURCE_NONE,
	};
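`cxl_regblock_get_bar_info()` above relies on `bar_index` being initialized to the `0xFF` sentinel, so a map that never went through `cxl_decode_regblock()` reports `-EINVAL` instead of a garbage BAR. A compilable userspace sketch of the sentinel-plus-optional-outputs pattern (the struct and function names are simplified stand-ins for `struct cxl_register_map` and its helper, not the kernel definitions):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

#define BAR_INDEX_INVALID 0xff /* stand-in for the 0xFF sentinel */

struct reg_map {
	uint8_t bar_index;   /* BIR 0-5, or BAR_INDEX_INVALID if unset */
	uint64_t bar_offset; /* offset within that BAR */
};

/* Mirrors cxl_regblock_get_bar_info(): both outputs are optional, and a
 * map still carrying the sentinel is rejected before either is written. */
static int get_bar_info(const struct reg_map *map, uint8_t *bar_index,
			uint64_t *bar_offset)
{
	if (!map || map->bar_index == BAR_INDEX_INVALID)
		return -EINVAL;
	if (bar_index)
		*bar_index = map->bar_index;
	if (bar_offset)
		*bar_offset = map->bar_offset;
	return 0;
}
```

A sentinel is needed because 0 is a valid BIR: BAR 0 is a legal location for a register block, so a zero-initialized `bar_index` could not distinguish "BAR 0" from "never decoded".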
2 changes: 0 additions & 2 deletions drivers/cxl/cxl.h
@@ -145,8 +145,6 @@ static inline int ways_to_eiw(unsigned int ways, u8 *eiw)
#define CXLDEV_MBOX_BG_CMD_COMMAND_VENDOR_MASK	GENMASK_ULL(63, 48)
#define CXLDEV_MBOX_PAYLOAD_OFFSET		0x20

void cxl_probe_component_regs(struct device *dev, void __iomem *base,
			      struct cxl_component_reg_map *map);
void cxl_probe_device_regs(struct device *dev, void __iomem *base,
			   struct cxl_device_reg_map *map);
int cxl_map_device_regs(const struct cxl_register_map *map,