
Commit 5b6bf8d

XanClic authored and bonzini committed
GuestMemory: Add get_slices()
With virtual memory, seemingly consecutive I/O virtual memory regions may actually be fragmented across multiple pages in our userspace mapping. Existing `descriptor_utils::Reader::new()` (and `Writer`) implementations (e.g. in virtiofsd or vm-virtio/virtio-queue) use `GuestMemory::get_slice()` to turn guest memory address ranges into valid slices in our address space; but with this fragmentation, it is easily possible that a range no longer corresponds to a single slice.

To fix this, add a `get_slices()` method that iterates over potentially multiple slices instead of returning a single one.

We should probably also deprecate `get_slice()`, but I’m hesitant to do that in the same commit/PR.

(We could also try to use `try_access()` as an existing internal iterator instead of this new external iterator, which would require adding lifetimes to `try_access()` so the region, and thus slices derived from it, could be moved outside of the closure. However, that will not work for the virtual memory we are going to introduce later: it will have a dirty bitmap that is independent of the one in guest memory regions, so its `try_access()` function will need to dirty it after the access. Therefore, the access must happen in that closure, and the reference to the region must not be moved outside.)

Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
1 parent 930ab47 commit 5b6bf8d
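To make the intended usage concrete, here is a minimal sketch of how a `descriptor_utils`-style consumer might gather a potentially fragmented range with the new API. The `gather_range()` helper and the two-region layout are made up for illustration; the code assumes vm-memory with this patch applied and the `backend-mmap` feature:

```rust
use vm_memory::{GuestAddress, GuestMemory, GuestMemoryMmap};

/// Hypothetical helper: copy `len` bytes starting at `addr` into an owned
/// buffer, slice by slice, because the range may span multiple regions.
fn gather_range<M: GuestMemory>(mem: &M, addr: GuestAddress, len: usize) -> Option<Vec<u8>> {
    let mut buf = vec![0u8; len];
    let mut filled = 0;
    for slice in mem.get_slices(addr, len) {
        // Any hole in the range fails the whole read.
        let slice = slice.ok()?;
        slice.copy_to(&mut buf[filled..filled + slice.len()]);
        filled += slice.len();
    }
    Some(buf)
}

fn main() {
    // Two adjacent regions: a range crossing 0x400 yields two slices.
    let mem = GuestMemoryMmap::from_ranges(&[
        (GuestAddress(0), 0x400),
        (GuestAddress(0x400), 0x400),
    ])
    .unwrap();

    let data = gather_range(&mem, GuestAddress(0x300), 0x200).unwrap();
    assert_eq!(data.len(), 0x200);
}
```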

File tree

4 files changed: +148 −1

- CHANGELOG.md
- src/bitmap/mod.rs
- src/guest_memory.rs
- src/mmap/mod.rs


CHANGELOG.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -9,6 +9,7 @@
   is now a type alias for `GuestRegionContainer<GuestRegionMmap>`).
 - \[[#338](https://github.com/rust-vmm/vm-memory/pull/338)\] Make `GuestMemoryAtomic` always implement `Clone`.
 - \[[#338](https://github.com/rust-vmm/vm-memory/pull/338)\] Make `GuestAddressSpace` a subtrait of `Clone`.
+- \[[#339](https://github.com/rust-vmm/vm-memory/pull/339)\] Add `GuestMemory::get_slices()`
 
 ### Changed
 
```

src/bitmap/mod.rs

Lines changed: 3 additions & 1 deletion
```diff
@@ -266,7 +266,9 @@ pub(crate) mod tests {
         let dirty_len = size_of_val(&val);
 
         let (region, region_addr) = m.to_region_addr(dirty_addr).unwrap();
-        let slice = m.get_slice(dirty_addr, dirty_len).unwrap();
+        let mut slices = m.get_slices(dirty_addr, dirty_len);
+        let slice = slices.next().unwrap().unwrap();
+        assert!(slices.next().is_none());
 
         assert!(range_is_clean(&region.bitmap(), 0, region.len() as usize));
         assert!(range_is_clean(slice.bitmap(), 0, dirty_len));
```
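Because the dirtied range lies entirely within one region here, the rewritten assertions additionally check that `get_slices()` yields exactly one slice and then ends.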

src/guest_memory.rs

Lines changed: 94 additions & 0 deletions
```diff
@@ -44,6 +44,7 @@
 use std::convert::From;
 use std::fs::File;
 use std::io;
+use std::iter::FusedIterator;
 use std::ops::{BitAnd, BitOr, Deref};
 use std::rc::Rc;
 use std::sync::atomic::Ordering;
@@ -455,8 +456,101 @@ pub trait GuestMemory {
             .ok_or(Error::InvalidGuestAddress(addr))
             .and_then(|(r, addr)| r.get_slice(addr, count))
     }
+
+    /// Returns an iterator over [`VolatileSlice`](struct.VolatileSlice.html)s, together covering
+    /// `count` bytes starting at `addr`.
+    ///
+    /// Iterating in this way is necessary because the given address range may be fragmented across
+    /// multiple [`GuestMemoryRegion`]s.
+    ///
+    /// The iterator’s items are wrapped in [`Result`], i.e. errors are reported on individual
+    /// items. If there is no such error, the cumulative length of all items will be equal to
+    /// `count`. If `count` is 0, an empty iterator will be returned.
+    fn get_slices<'a>(
+        &'a self,
+        addr: GuestAddress,
+        count: usize,
+    ) -> GuestMemorySliceIterator<'a, Self> {
+        GuestMemorySliceIterator {
+            mem: self,
+            addr,
+            count,
+        }
+    }
+}
+
+/// Iterates over [`VolatileSlice`]s that together form a guest memory area.
+///
+/// Returned by [`GuestMemory::get_slices()`].
+#[derive(Debug)]
+pub struct GuestMemorySliceIterator<'a, M: GuestMemory + ?Sized> {
+    /// Underlying memory
+    mem: &'a M,
+    /// Next address in the guest memory area
+    addr: GuestAddress,
+    /// Remaining bytes in the guest memory area
+    count: usize,
+}
+
+impl<'a, M: GuestMemory + ?Sized> GuestMemorySliceIterator<'a, M> {
+    /// Helper function for [`<Self as Iterator>::next()`](GuestMemorySliceIterator::next).
+    ///
+    /// Get the next slice (i.e. the one starting from `self.addr` with a length up to
+    /// `self.count`) and update the internal state.
+    ///
+    /// # Safety
+    ///
+    /// This function does not reset `self.count` to 0 in case of error, i.e. will not stop
+    /// iterating. Actual behavior after an error is ill-defined, so the caller must check the
+    /// return value, and in case of an error, reset `self.count` to 0.
+    ///
+    /// (This is why this function exists, so this resetting can be done in a single central
+    /// location.)
+    unsafe fn do_next(&mut self) -> Option<Result<VolatileSlice<'a, MS<'a, M>>>> {
+        if self.count == 0 {
+            return None;
+        }
+
+        let Some((region, start)) = self.mem.to_region_addr(self.addr) else {
+            return Some(Err(Error::InvalidGuestAddress(self.addr)));
+        };
+
+        let cap = region.len() - start.raw_value();
+        let len = std::cmp::min(cap, self.count as GuestUsize);
+
+        self.count -= len as usize;
+        self.addr = match self.addr.overflowing_add(len as GuestUsize) {
+            (x @ GuestAddress(0), _) | (x, false) => x,
+            (_, true) => return Some(Err(Error::GuestAddressOverflow)),
+        };
+
+        Some(region.get_slice(start, len as usize))
+    }
+}
+
+impl<'a, M: GuestMemory + ?Sized> Iterator for GuestMemorySliceIterator<'a, M> {
+    type Item = Result<VolatileSlice<'a, MS<'a, M>>>;
+
+    fn next(&mut self) -> Option<Self::Item> {
+        // SAFETY:
+        // We reset `self.count` to 0 on error
+        match unsafe { self.do_next() } {
+            Some(Ok(slice)) => Some(Ok(slice)),
+            other => {
+                // On error (or end), reset to 0 so iteration remains stopped
+                self.count = 0;
+                other
+            }
+        }
+    }
 }
 
+/// This iterator continues to return `None` when exhausted.
+///
+/// [`<Self as Iterator>::next()`](GuestMemorySliceIterator::next) sets `self.count` to 0 when
+/// returning `None`, ensuring that it will only return `None` from that point on.
+impl<M: GuestMemory + ?Sized> FusedIterator for GuestMemorySliceIterator<'_, M> {}
+
 impl<T: GuestMemory + ?Sized> Bytes<GuestAddress> for T {
     type E = Error;
```
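Two details worth noting: the `(x @ GuestAddress(0), _)` arm in `do_next()` deliberately accepts an overflowing add that wraps exactly to address 0 (a region ending at the very top of the address space), and `next()` resets `count` to 0 on error so iteration stops for good. A minimal sketch of that stop-on-error behavior, assuming this patch and the `backend-mmap` feature (the one-region layout is made up):

```rust
use vm_memory::{GuestAddress, GuestMemory, GuestMemoryMmap};

fn main() {
    // A single region covering [0x0, 0x400); everything above is unmapped.
    let mem = GuestMemoryMmap::from_ranges(&[(GuestAddress(0), 0x400)]).unwrap();

    let mut slices = mem.get_slices(GuestAddress(0x300), 0x200);
    // First item: the 0x100 bytes that still fit into the region.
    assert_eq!(slices.next().unwrap().unwrap().len(), 0x100);
    // Second item: an `InvalidGuestAddress` error, since 0x400 is unmapped.
    assert!(slices.next().unwrap().is_err());
    // The error reset `count` to 0, so the (fused) iterator stays exhausted.
    assert!(slices.next().is_none());
}
```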

src/mmap/mod.rs

Lines changed: 50 additions & 0 deletions
```diff
@@ -624,6 +624,56 @@ mod tests {
         assert!(guest_mem.get_slice(GuestAddress(0xc00), 0x100).is_err());
     }
 
+    #[test]
+    fn test_guest_memory_get_slices() {
+        let start_addr1 = GuestAddress(0);
+        let start_addr2 = GuestAddress(0x800);
+        let start_addr3 = GuestAddress(0xc00);
+        let guest_mem = GuestMemoryMmap::from_ranges(&[
+            (start_addr1, 0x400),
+            (start_addr2, 0x400),
+            (start_addr3, 0x400),
+        ])
+        .unwrap();
+
+        // Same cases as `test_guest_memory_get_slice()`, just with `get_slices()`.
+        let slice_size = 0x200;
+        let mut slices = guest_mem.get_slices(GuestAddress(0x100), slice_size);
+        let slice = slices.next().unwrap().unwrap();
+        assert!(slices.next().is_none());
+        assert_eq!(slice.len(), slice_size);
+
+        let slice_size = 0x400;
+        let mut slices = guest_mem.get_slices(GuestAddress(0x800), slice_size);
+        let slice = slices.next().unwrap().unwrap();
+        assert!(slices.next().is_none());
+        assert_eq!(slice.len(), slice_size);
+
+        // Empty iterator.
+        assert!(guest_mem
+            .get_slices(GuestAddress(0x900), 0)
+            .next()
+            .is_none());
+
+        // Error cases, wrong size or base address.
+        let mut slices = guest_mem.get_slices(GuestAddress(0), 0x500);
+        assert_eq!(slices.next().unwrap().unwrap().len(), 0x400);
+        assert!(slices.next().unwrap().is_err());
+        assert!(slices.next().is_none());
+        let mut slices = guest_mem.get_slices(GuestAddress(0x600), 0x100);
+        assert!(slices.next().unwrap().is_err());
+        assert!(slices.next().is_none());
+        let mut slices = guest_mem.get_slices(GuestAddress(0x1000), 0x100);
+        assert!(slices.next().unwrap().is_err());
+        assert!(slices.next().is_none());
+
+        // Test fragmented case
+        let mut slices = guest_mem.get_slices(GuestAddress(0xa00), 0x400);
+        assert_eq!(slices.next().unwrap().unwrap().len(), 0x200);
+        assert_eq!(slices.next().unwrap().unwrap().len(), 0x200);
+        assert!(slices.next().is_none());
+    }
+
     #[test]
     fn test_atomic_accesses() {
         let region = GuestRegionMmap::from_range(GuestAddress(0), 0x1000, None).unwrap();
```
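The final, fragmented case is the one the new API exists for: the 0x400-byte range at 0xa00 straddles the regions starting at 0x800 and 0xc00, so `get_slices()` yields two 0x200-byte slices where `get_slice()` could not have returned a single contiguous one.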
