
Commit a1b501a

Authored by dtatulea, committed by Paolo Abeni
page_pool: Clamp pool size to max 16K pages
page_pool_init() returns -E2BIG when the page_pool size goes above 32K pages. As some drivers configure the page_pool size according to the MTU and ring size, there are cases where this limit is exceeded and queue creation fails.

The page_pool size doesn't have to cover a full queue, especially for larger ring sizes. So clamp the size instead of returning an error. Do this in the core to avoid having each driver do the clamping.

The current limit was deemed too high [1], so it was reduced to 16K to avoid page waste.

[1] https://lore.kernel.org/all/1758532715-820422-3-git-send-email-tariqt@nvidia.com/

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20250926131605.2276734-2-dtatulea@nvidia.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
1 parent 2ade917 commit a1b501a

File tree

1 file changed: +1 −5 lines changed


net/core/page_pool.c

Lines changed: 1 addition & 5 deletions
@@ -211,11 +211,7 @@ static int page_pool_init(struct page_pool *pool,
 		return -EINVAL;
 
 	if (pool->p.pool_size)
-		ring_qsize = pool->p.pool_size;
-
-	/* Sanity limit mem that can be pinned down */
-	if (ring_qsize > 32768)
-		return -E2BIG;
+		ring_qsize = min(pool->p.pool_size, 16384);
 
 	/* DMA direction is either DMA_FROM_DEVICE or DMA_BIDIRECTIONAL.
 	 * DMA_BIDIRECTIONAL is for allowing page used for DMA sending,
