Raw, unsafe pointers, *const T, and *mut T.
Working with raw pointers in Rust is uncommon, typically limited to a few patterns.
Use the null and null_mut functions to create null pointers, and the is_null method of the *const T and *mut T types to check for null. The *const T and *mut T types also define the offset method, for pointer math.
Common ways to create raw pointers:

1. Coerce a reference (&T) or mutable reference (&mut T).

    let my_num: i32 = 10;
    let my_num_ptr: *const i32 = &my_num;
    let mut my_speed: i32 = 88;
    let my_speed_ptr: *mut i32 = &mut my_speed;
To get a pointer to a boxed value, dereference the box:
    let my_num: Box<i32> = Box::new(10);
    let my_num_ptr: *const i32 = &*my_num;
    let mut my_speed: Box<i32> = Box::new(88);
    let my_speed_ptr: *mut i32 = &mut *my_speed;
This does not take ownership of the original allocation and requires no resource management later, but you must not use the pointer after its lifetime.
2. Consume a box (Box<T>).

The into_raw function consumes a box and returns the raw pointer. It doesn't destroy T or deallocate any memory.

    let my_speed: Box<i32> = Box::new(88);
    let my_speed: *mut i32 = Box::into_raw(my_speed);

    // By taking ownership of the original `Box<T>` though
    // we are obligated to put it together later to be destroyed.
    unsafe {
        drop(Box::from_raw(my_speed));
    }

Note that here the call to drop is for clarity - it indicates that we are done with the given value and it should be destroyed.
3. Get it from C.

    extern crate libc;

    use std::mem;

    fn main() {
        unsafe {
            let my_num: *mut i32 = libc::malloc(mem::size_of::<i32>()) as *mut i32;
            if my_num.is_null() {
                panic!("failed to allocate memory");
            }
            libc::free(my_num as *mut libc::c_void);
        }
    }

Usually you wouldn't literally use malloc and free from Rust, but C APIs hand out a lot of pointers generally, so they are a common source of raw pointers in Rust.
impl<T> *const T where T: ?Sized

pub fn is_null(self) -> bool

Returns true if the pointer is null.
Note that unsized types have many possible null pointers, as only the raw data pointer is considered, not their length, vtable, etc. Therefore, two pointers that are null may still not compare equal to each other.
Basic usage:
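For instance, a pointer obtained from a live string slice is never null; a minimal sketch:

    let s: &str = "Follow the rabbit";
    let ptr: *const u8 = s.as_ptr();
    assert!(!ptr.is_null());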
pub fn cast<U>(self) -> *const U

Casts to a pointer to a different type.

pub unsafe fn as_ref<'a>(self) -> Option<&'a T>  (since 1.9.0)

Returns None if the pointer is null, or else returns a reference to the value wrapped in Some.

While this method and its mutable counterpart are useful for null-safety, it is important to note that this is still an unsafe operation because the returned value could be pointing to invalid memory.

When calling this method, you have to ensure that if the pointer is non-NULL, then it is properly aligned, dereferenceable (for the whole size of T) and points to an initialized instance of T. This applies even if the result of this method is unused! (The part about being initialized is not yet fully decided, but until it is, the only safe approach is to ensure that they are indeed initialized.)

Additionally, the lifetime 'a returned is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. It is up to the caller to ensure that for the duration of this lifetime, the memory this pointer points to does not get written to outside of UnsafeCell<U>.

Basic usage:

    let ptr: *const u8 = &10u8 as *const u8;

    unsafe {
        if let Some(val_back) = ptr.as_ref() {
            println!("We got back the value: {}!", val_back);
        }
    }

If you are sure the pointer can never be null and are looking for some kind of as_ref_unchecked that returns the &T instead of Option<&T>, know that you can dereference the pointer directly.
pub unsafe fn offset(self, count: isize) -> *const T

Calculates the offset from a pointer.

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

If any of the following conditions are violated, the result is Undefined Behavior:

Both the starting and resulting pointer must be either in bounds or one byte past the end of the same allocated object.
The computed offset, in bytes, cannot overflow an isize.
The offset being in bounds cannot rely on "wrapping around" the address space. That is, the infinite-precision sum, in bytes, must fit in a usize.

The compiler and standard library generally try to ensure allocations never reach a size where an offset is a concern. For instance, Vec and Box ensure they never allocate more than isize::MAX bytes, so vec.as_ptr().add(vec.len()) is always safe.

Most platforms fundamentally can't even construct such an allocation. For instance, no known 64-bit platform can ever serve a request for 2^63 bytes due to page-table limitations or splitting the address space. However, some 32-bit and 16-bit platforms may successfully serve a request for more than isize::MAX bytes with things like Physical Address Extension. As such, memory acquired directly from allocators or memory mapped files may be too large to handle with this function.

Consider using wrapping_offset instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations.
Basic usage:
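A small sketch, stepping through the bytes of a string slice:

    let s: &str = "123";
    let ptr: *const u8 = s.as_ptr();

    unsafe {
        println!("{}", *ptr.offset(1) as char); // prints "2"
        println!("{}", *ptr.offset(2) as char); // prints "3"
    }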
pub fn wrapping_offset(self, count: isize) -> *const T  (since 1.16.0)

Calculates the offset from a pointer using wrapping arithmetic.

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

The resulting pointer does not need to be in bounds, but it is potentially hazardous to dereference (which requires unsafe). In particular, the resulting pointer may not be used to access a different allocated object than the one self points to. In other words, x.wrapping_offset(y.wrapping_offset_from(x)) is not the same as y, and dereferencing it is undefined behavior unless x and y point into the same allocated object.

Always use .offset(count) instead when possible, because offset allows the compiler to optimize better. If you need to cross object boundaries, cast the pointer to an integer and do the arithmetic there.

Basic usage:

    // Iterate using a raw pointer in increments of two elements
    let data = [1u8, 2, 3, 4, 5];
    let mut ptr: *const u8 = data.as_ptr();
    let step = 2;
    let end_rounded_up = ptr.wrapping_offset(6);

    // This loop prints "1, 3, 5, "
    while ptr != end_rounded_up {
        unsafe {
            print!("{}, ", *ptr);
        }
        ptr = ptr.wrapping_offset(step);
    }
pub unsafe fn offset_from(self, origin: *const T) -> isize

Calculates the distance between two pointers. The returned value is in units of T: the distance in bytes is divided by mem::size_of::<T>().

This function is the inverse of offset.

If any of the following conditions are violated, the result is Undefined Behavior:

Both the starting and other pointer must be either in bounds or one byte past the end of the same allocated object.
The distance between the pointers, in bytes, cannot overflow an isize.
The distance between the pointers, in bytes, must be an exact multiple of the size of T.
The distance being in bounds cannot rely on "wrapping around" the address space.

The compiler and standard library generally try to ensure allocations never reach a size where an offset is a concern. For instance, Vec and Box ensure they never allocate more than isize::MAX bytes, so ptr_into_vec.offset_from(vec.as_ptr()) is always safe.

Most platforms fundamentally can't even construct such an allocation. For instance, no known 64-bit platform can ever serve a request for 2^63 bytes due to page-table limitations or splitting the address space. However, some 32-bit and 16-bit platforms may successfully serve a request for more than isize::MAX bytes with things like Physical Address Extension. As such, memory acquired directly from allocators or memory mapped files may be too large to handle with this function.

Consider using wrapping_offset_from instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations.

This function panics if T is a Zero-Sized Type ("ZST").
Basic usage:
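A sketch with two pointers into the same array; on older toolchains this method may additionally require a nightly feature gate:

    let a = [0; 5];
    let ptr1: *const i32 = &a[1];
    let ptr2: *const i32 = &a[3];
    unsafe {
        assert_eq!(ptr2.offset_from(ptr1), 2);
        assert_eq!(ptr1.offset_from(ptr2), -2);
        assert_eq!(ptr1.offset(2), ptr2);
        assert_eq!(ptr2.offset(-2), ptr1);
    }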
pub fn wrapping_offset_from(self, origin: *const T) -> isize

Calculates the distance between two pointers. The returned value is in units of T: the distance in bytes is divided by mem::size_of::<T>().

If the address difference between the two pointers is not a multiple of mem::size_of::<T>() then the result of the division is rounded towards zero.

Though this method is safe for any two pointers, note that its result will be mostly useless if the two pointers aren't into the same allocated object, for example if they point to two different local variables.

This function panics if T is a zero-sized type.

Basic usage:

    #![feature(ptr_wrapping_offset_from)]

    let a = [0; 5];
    let ptr1: *const i32 = &a[1];
    let ptr2: *const i32 = &a[3];
    assert_eq!(ptr2.wrapping_offset_from(ptr1), 2);
    assert_eq!(ptr1.wrapping_offset_from(ptr2), -2);
    assert_eq!(ptr1.wrapping_offset(2), ptr2);
    assert_eq!(ptr2.wrapping_offset(-2), ptr1);

    let ptr1: *const i32 = 3 as _;
    let ptr2: *const i32 = 13 as _;
    assert_eq!(ptr2.wrapping_offset_from(ptr1), 2);
pub unsafe fn add(self, count: usize) -> *const T  (since 1.26.0)

Calculates the offset from a pointer (convenience for .offset(count as isize)).

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

If any of the following conditions are violated, the result is Undefined Behavior:

Both the starting and resulting pointer must be either in bounds or one byte past the end of the same allocated object.
The computed offset, in bytes, cannot overflow an isize.
The offset being in bounds cannot rely on "wrapping around" the address space. That is, the infinite-precision sum must fit in a usize.

The compiler and standard library generally try to ensure allocations never reach a size where an offset is a concern. For instance, Vec and Box ensure they never allocate more than isize::MAX bytes, so vec.as_ptr().add(vec.len()) is always safe.

Most platforms fundamentally can't even construct such an allocation. For instance, no known 64-bit platform can ever serve a request for 2^63 bytes due to page-table limitations or splitting the address space. However, some 32-bit and 16-bit platforms may successfully serve a request for more than isize::MAX bytes with things like Physical Address Extension. As such, memory acquired directly from allocators or memory mapped files may be too large to handle with this function.

Consider using wrapping_offset instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations.
Basic usage:
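For example, walking forward through the bytes of a string slice:

    let s: &str = "123";
    let ptr: *const u8 = s.as_ptr();

    unsafe {
        println!("{}", *ptr.add(1) as char); // prints "2"
        println!("{}", *ptr.add(2) as char); // prints "3"
    }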
pub unsafe fn sub(self, count: usize) -> *const T  (since 1.26.0)

Calculates the offset from a pointer (convenience for .offset((count as isize).wrapping_neg())).

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

If any of the following conditions are violated, the result is Undefined Behavior:

Both the starting and resulting pointer must be either in bounds or one byte past the end of the same allocated object.
The computed offset cannot exceed isize::MAX bytes.
The offset being in bounds cannot rely on "wrapping around" the address space. That is, the infinite-precision sum must fit in a usize.

The compiler and standard library generally try to ensure allocations never reach a size where an offset is a concern. For instance, Vec and Box ensure they never allocate more than isize::MAX bytes, so vec.as_ptr().add(vec.len()).sub(vec.len()) is always safe.

Most platforms fundamentally can't even construct such an allocation. For instance, no known 64-bit platform can ever serve a request for 2^63 bytes due to page-table limitations or splitting the address space. However, some 32-bit and 16-bit platforms may successfully serve a request for more than isize::MAX bytes with things like Physical Address Extension. As such, memory acquired directly from allocators or memory mapped files may be too large to handle with this function.

Consider using wrapping_offset instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations.
Basic usage:
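For example, stepping back from a pointer just past the end of a string slice:

    let s: &str = "123";

    unsafe {
        let end: *const u8 = s.as_ptr().add(3);
        println!("{}", *end.sub(1) as char); // prints "3"
        println!("{}", *end.sub(2) as char); // prints "2"
    }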
pub fn wrapping_add(self, count: usize) -> *const T  (since 1.26.0)

Calculates the offset from a pointer using wrapping arithmetic (convenience for .wrapping_offset(count as isize)).

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

The resulting pointer does not need to be in bounds, but it is potentially hazardous to dereference (which requires unsafe).

Always use .add(count) instead when possible, because add allows the compiler to optimize better.

Basic usage:

    // Iterate using a raw pointer in increments of two elements
    let data = [1u8, 2, 3, 4, 5];
    let mut ptr: *const u8 = data.as_ptr();
    let step = 2;
    let end_rounded_up = ptr.wrapping_add(6);

    // This loop prints "1, 3, 5, "
    while ptr != end_rounded_up {
        unsafe {
            print!("{}, ", *ptr);
        }
        ptr = ptr.wrapping_add(step);
    }
pub fn wrapping_sub(self, count: usize) -> *const T  (since 1.26.0)

Calculates the offset from a pointer using wrapping arithmetic (convenience for .wrapping_offset((count as isize).wrapping_neg())).

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

The resulting pointer does not need to be in bounds, but it is potentially hazardous to dereference (which requires unsafe).

Always use .sub(count) instead when possible, because sub allows the compiler to optimize better.

Basic usage:

    // Iterate using a raw pointer in increments of two elements (backwards)
    let data = [1u8, 2, 3, 4, 5];
    let mut ptr: *const u8 = data.as_ptr();
    let start_rounded_down = ptr.wrapping_sub(2);
    ptr = ptr.wrapping_add(4);
    let step = 2;
    // This loop prints "5, 3, 1, "
    while ptr != start_rounded_down {
        unsafe {
            print!("{}, ", *ptr);
        }
        ptr = ptr.wrapping_sub(step);
    }
pub unsafe fn read(self) -> T  (since 1.26.0)

Reads the value from self without moving it. This leaves the memory in self unchanged.

See ptr::read for safety concerns and examples.

pub unsafe fn read_volatile(self) -> T  (since 1.26.0)

Performs a volatile read of the value from self without moving it. This leaves the memory in self unchanged.

Volatile operations are intended to act on I/O memory, and are guaranteed to not be elided or reordered by the compiler across other volatile operations.

See ptr::read_volatile for safety concerns and examples.

pub unsafe fn read_unaligned(self) -> T  (since 1.26.0)

Reads the value from self without moving it. This leaves the memory in self unchanged.

Unlike read, the pointer may be unaligned.

See ptr::read_unaligned for safety concerns and examples.

pub unsafe fn copy_to(self, dest: *mut T, count: usize)  (since 1.26.0)

Copies count * size_of::<T>() bytes from self to dest. The source and destination may overlap.

NOTE: this has the same argument order as ptr::copy.

See ptr::copy for safety concerns and examples.

pub unsafe fn copy_to_nonoverlapping(self, dest: *mut T, count: usize)  (since 1.26.0)

Copies count * size_of::<T>() bytes from self to dest. The source and destination may not overlap.

NOTE: this has the same argument order as ptr::copy_nonoverlapping.

See ptr::copy_nonoverlapping for safety concerns and examples.
pub fn align_offset(self, align: usize) -> usize  (since 1.36.0)

Computes the offset that needs to be applied to the pointer in order to make it aligned to align.

If it is not possible to align the pointer, the implementation returns usize::max_value().

The offset is expressed in number of T elements, and not bytes. The value returned can be used with the offset or offset_to methods.

There are no guarantees whatsoever that offsetting the pointer will not overflow or go beyond the allocation that the pointer points into. It is up to the caller to ensure that the returned offset is correct in all terms other than alignment.

The function panics if align is not a power-of-two.

Accessing adjacent u8 as u16:

    use std::mem::align_of;

    // `n` is an index into the array; any in-bounds value works for the illustration.
    let n = 1;
    let x = [5u8, 6u8, 7u8, 8u8, 9u8];
    let ptr = &x[n] as *const u8;
    let offset = ptr.align_offset(align_of::<u16>());
    if offset < x.len() - n - 1 {
        unsafe {
            let u16_ptr = ptr.add(offset) as *const u16;
            assert_ne!(*u16_ptr, 500);
        }
    } else {
        // while the pointer can be aligned via `offset`, it would point
        // outside the allocation
    }
impl<T> *mut T where T: ?Sized

pub fn is_null(self) -> bool

Returns true if the pointer is null.
Note that unsized types have many possible null pointers, as only the raw data pointer is considered, not their length, vtable, etc. Therefore, two pointers that are null may still not compare equal to each other.
Basic usage:
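For instance, a pointer obtained from a live array is never null; a minimal sketch:

    let mut s = [1, 2, 3];
    let ptr: *mut u32 = s.as_mut_ptr();
    assert!(!ptr.is_null());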
pub fn cast<U>(self) -> *mut U

Casts to a pointer to a different type.

pub unsafe fn as_ref<'a>(self) -> Option<&'a T>  (since 1.9.0)

Returns None if the pointer is null, or else returns a reference to the value wrapped in Some.

While this method and its mutable counterpart are useful for null-safety, it is important to note that this is still an unsafe operation because the returned value could be pointing to invalid memory.

When calling this method, you have to ensure that if the pointer is non-NULL, then it is properly aligned, dereferenceable (for the whole size of T) and points to an initialized instance of T. This applies even if the result of this method is unused! (The part about being initialized is not yet fully decided, but until it is, the only safe approach is to ensure that they are indeed initialized.)

Additionally, the lifetime 'a returned is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. It is up to the caller to ensure that for the duration of this lifetime, the memory this pointer points to does not get written to outside of UnsafeCell<U>.

Basic usage:

    let ptr: *mut u8 = &mut 10u8 as *mut u8;

    unsafe {
        if let Some(val_back) = ptr.as_ref() {
            println!("We got back the value: {}!", val_back);
        }
    }

If you are sure the pointer can never be null and are looking for some kind of as_ref_unchecked that returns the &T instead of Option<&T>, know that you can dereference the pointer directly.
pub unsafe fn offset(self, count: isize) -> *mut T

Calculates the offset from a pointer.

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

If any of the following conditions are violated, the result is Undefined Behavior:

Both the starting and resulting pointer must be either in bounds or one byte past the end of the same allocated object.
The computed offset, in bytes, cannot overflow an isize.
The offset being in bounds cannot rely on "wrapping around" the address space. That is, the infinite-precision sum, in bytes, must fit in a usize.

The compiler and standard library generally try to ensure allocations never reach a size where an offset is a concern. For instance, Vec and Box ensure they never allocate more than isize::MAX bytes, so vec.as_ptr().add(vec.len()) is always safe.

Most platforms fundamentally can't even construct such an allocation. For instance, no known 64-bit platform can ever serve a request for 2^63 bytes due to page-table limitations or splitting the address space. However, some 32-bit and 16-bit platforms may successfully serve a request for more than isize::MAX bytes with things like Physical Address Extension. As such, memory acquired directly from allocators or memory mapped files may be too large to handle with this function.

Consider using wrapping_offset instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations.
Basic usage:
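A small sketch, reading successive elements of a mutable array through the pointer:

    let mut s = [1, 2, 3];
    let ptr: *mut u32 = s.as_mut_ptr();

    unsafe {
        println!("{}", *ptr.offset(1)); // prints "2"
        println!("{}", *ptr.offset(2)); // prints "3"
    }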
pub fn wrapping_offset(self, count: isize) -> *mut T  (since 1.16.0)

Calculates the offset from a pointer using wrapping arithmetic.

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

The resulting pointer does not need to be in bounds, but it is potentially hazardous to dereference (which requires unsafe). In particular, the resulting pointer may not be used to access a different allocated object than the one self points to. In other words, x.wrapping_offset(y.wrapping_offset_from(x)) is not the same as y, and dereferencing it is undefined behavior unless x and y point into the same allocated object.

Always use .offset(count) instead when possible, because offset allows the compiler to optimize better. If you need to cross object boundaries, cast the pointer to an integer and do the arithmetic there.

Basic usage:

    // Iterate using a raw pointer in increments of two elements
    let mut data = [1u8, 2, 3, 4, 5];
    let mut ptr: *mut u8 = data.as_mut_ptr();
    let step = 2;
    let end_rounded_up = ptr.wrapping_offset(6);

    while ptr != end_rounded_up {
        unsafe {
            *ptr = 0;
        }
        ptr = ptr.wrapping_offset(step);
    }
    assert_eq!(&data, &[0, 2, 0, 4, 0]);
pub unsafe fn as_mut<'a>(self) -> Option<&'a mut T>  (since 1.9.0)

Returns None if the pointer is null, or else returns a mutable reference to the value wrapped in Some.

As with as_ref, this is unsafe because it cannot verify the validity of the returned pointer, nor can it ensure that the lifetime 'a returned is indeed a valid lifetime for the contained data.

When calling this method, you have to ensure that if the pointer is non-NULL, then it is properly aligned, dereferenceable (for the whole size of T) and points to an initialized instance of T. This applies even if the result of this method is unused! (The part about being initialized is not yet fully decided, but until it is, the only safe approach is to ensure that they are indeed initialized.)

Additionally, the lifetime 'a returned is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. It is up to the caller to ensure that for the duration of this lifetime, the memory this pointer points to does not get accessed through any other pointer.
Basic usage:
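For example, mutating the first element of an array through the returned reference:

    let mut s = [1, 2, 3];
    let ptr: *mut u32 = s.as_mut_ptr();
    let first_value = unsafe { ptr.as_mut().unwrap() };
    *first_value = 4;
    println!("{:?}", s); // prints "[4, 2, 3]"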
pub unsafe fn offset_from(self, origin: *const T) -> isize

Calculates the distance between two pointers. The returned value is in units of T: the distance in bytes is divided by mem::size_of::<T>().

This function is the inverse of offset.

If any of the following conditions are violated, the result is Undefined Behavior:

Both the starting and other pointer must be either in bounds or one byte past the end of the same allocated object.
The distance between the pointers, in bytes, cannot overflow an isize.
The distance between the pointers, in bytes, must be an exact multiple of the size of T.
The distance being in bounds cannot rely on "wrapping around" the address space.

The compiler and standard library generally try to ensure allocations never reach a size where an offset is a concern. For instance, Vec and Box ensure they never allocate more than isize::MAX bytes, so ptr_into_vec.offset_from(vec.as_ptr()) is always safe.

Most platforms fundamentally can't even construct such an allocation. For instance, no known 64-bit platform can ever serve a request for 2^63 bytes due to page-table limitations or splitting the address space. However, some 32-bit and 16-bit platforms may successfully serve a request for more than isize::MAX bytes with things like Physical Address Extension. As such, memory acquired directly from allocators or memory mapped files may be too large to handle with this function.

Consider using wrapping_offset_from instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations.

This function panics if T is a Zero-Sized Type ("ZST").
Basic usage:
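A sketch with two pointers into the same mutable array; on older toolchains this method may additionally require a nightly feature gate:

    let mut a = [0; 5];
    let ptr1: *mut i32 = &mut a[1];
    let ptr2: *mut i32 = &mut a[3];
    unsafe {
        assert_eq!(ptr2.offset_from(ptr1), 2);
        assert_eq!(ptr1.offset_from(ptr2), -2);
        assert_eq!(ptr1.offset(2), ptr2);
        assert_eq!(ptr2.offset(-2), ptr1);
    }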
pub fn wrapping_offset_from(self, origin: *const T) -> isize

Calculates the distance between two pointers. The returned value is in units of T: the distance in bytes is divided by mem::size_of::<T>().

If the address difference between the two pointers is not a multiple of mem::size_of::<T>() then the result of the division is rounded towards zero.

Though this method is safe for any two pointers, note that its result will be mostly useless if the two pointers aren't into the same allocated object, for example if they point to two different local variables.

This function panics if T is a zero-sized type.

Basic usage:

    #![feature(ptr_wrapping_offset_from)]

    let mut a = [0; 5];
    let ptr1: *mut i32 = &mut a[1];
    let ptr2: *mut i32 = &mut a[3];
    assert_eq!(ptr2.wrapping_offset_from(ptr1), 2);
    assert_eq!(ptr1.wrapping_offset_from(ptr2), -2);
    assert_eq!(ptr1.wrapping_offset(2), ptr2);
    assert_eq!(ptr2.wrapping_offset(-2), ptr1);

    let ptr1: *mut i32 = 3 as _;
    let ptr2: *mut i32 = 13 as _;
    assert_eq!(ptr2.wrapping_offset_from(ptr1), 2);
pub unsafe fn add(self, count: usize) -> *mut T  (since 1.26.0)

Calculates the offset from a pointer (convenience for .offset(count as isize)).

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

If any of the following conditions are violated, the result is Undefined Behavior:

Both the starting and resulting pointer must be either in bounds or one byte past the end of the same allocated object.
The computed offset, in bytes, cannot overflow an isize.
The offset being in bounds cannot rely on "wrapping around" the address space. That is, the infinite-precision sum must fit in a usize.

The compiler and standard library generally try to ensure allocations never reach a size where an offset is a concern. For instance, Vec and Box ensure they never allocate more than isize::MAX bytes, so vec.as_ptr().add(vec.len()) is always safe.

Most platforms fundamentally can't even construct such an allocation. For instance, no known 64-bit platform can ever serve a request for 2^63 bytes due to page-table limitations or splitting the address space. However, some 32-bit and 16-bit platforms may successfully serve a request for more than isize::MAX bytes with things like Physical Address Extension. As such, memory acquired directly from allocators or memory mapped files may be too large to handle with this function.

Consider using wrapping_offset instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations.
Basic usage:
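For example, reading successive elements of a mutable array:

    let mut s = [1, 2, 3];
    let ptr: *mut u32 = s.as_mut_ptr();

    unsafe {
        println!("{}", *ptr.add(1)); // prints "2"
        println!("{}", *ptr.add(2)); // prints "3"
    }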
pub unsafe fn sub(self, count: usize) -> *mut T  (since 1.26.0)

Calculates the offset from a pointer (convenience for .offset((count as isize).wrapping_neg())).

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

If any of the following conditions are violated, the result is Undefined Behavior:

Both the starting and resulting pointer must be either in bounds or one byte past the end of the same allocated object.
The computed offset cannot exceed isize::MAX bytes.
The offset being in bounds cannot rely on "wrapping around" the address space. That is, the infinite-precision sum must fit in a usize.

The compiler and standard library generally try to ensure allocations never reach a size where an offset is a concern. For instance, Vec and Box ensure they never allocate more than isize::MAX bytes, so vec.as_ptr().add(vec.len()).sub(vec.len()) is always safe.

Most platforms fundamentally can't even construct such an allocation. For instance, no known 64-bit platform can ever serve a request for 2^63 bytes due to page-table limitations or splitting the address space. However, some 32-bit and 16-bit platforms may successfully serve a request for more than isize::MAX bytes with things like Physical Address Extension. As such, memory acquired directly from allocators or memory mapped files may be too large to handle with this function.

Consider using wrapping_offset instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations.
Basic usage:
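For example, stepping back from a pointer just past the end of an array:

    let mut s = [1u32, 2, 3];

    unsafe {
        let end: *mut u32 = s.as_mut_ptr().add(3);
        println!("{}", *end.sub(1)); // prints "3"
        println!("{}", *end.sub(2)); // prints "2"
    }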
pub fn wrapping_add(self, count: usize) -> *mut T  (since 1.26.0)

Calculates the offset from a pointer using wrapping arithmetic (convenience for .wrapping_offset(count as isize)).

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

The resulting pointer does not need to be in bounds, but it is potentially hazardous to dereference (which requires unsafe).

Always use .add(count) instead when possible, because add allows the compiler to optimize better.

Basic usage:

    // Iterate using a raw pointer in increments of two elements
    let data = [1u8, 2, 3, 4, 5];
    let mut ptr: *const u8 = data.as_ptr();
    let step = 2;
    let end_rounded_up = ptr.wrapping_add(6);

    // This loop prints "1, 3, 5, "
    while ptr != end_rounded_up {
        unsafe {
            print!("{}, ", *ptr);
        }
        ptr = ptr.wrapping_add(step);
    }
pub fn wrapping_sub(self, count: usize) -> *mut T  (since 1.26.0)

Calculates the offset from a pointer using wrapping arithmetic (convenience for .wrapping_offset((count as isize).wrapping_neg())).

count is in units of T; e.g., a count of 3 represents a pointer offset of 3 * size_of::<T>() bytes.

The resulting pointer does not need to be in bounds, but it is potentially hazardous to dereference (which requires unsafe).

Always use .sub(count) instead when possible, because sub allows the compiler to optimize better.

Basic usage:

    // Iterate using a raw pointer in increments of two elements (backwards)
    let data = [1u8, 2, 3, 4, 5];
    let mut ptr: *const u8 = data.as_ptr();
    let start_rounded_down = ptr.wrapping_sub(2);
    ptr = ptr.wrapping_add(4);
    let step = 2;
    // This loop prints "5, 3, 1, "
    while ptr != start_rounded_down {
        unsafe {
            print!("{}, ", *ptr);
        }
        ptr = ptr.wrapping_sub(step);
    }
pub unsafe fn read(self) -> T  (since 1.26.0)

Reads the value from self without moving it. This leaves the memory in self unchanged.

See ptr::read for safety concerns and examples.

pub unsafe fn read_volatile(self) -> T  (since 1.26.0)

Performs a volatile read of the value from self without moving it. This leaves the memory in self unchanged.

Volatile operations are intended to act on I/O memory, and are guaranteed to not be elided or reordered by the compiler across other volatile operations.

See ptr::read_volatile for safety concerns and examples.

pub unsafe fn read_unaligned(self) -> T  (since 1.26.0)

Reads the value from self without moving it. This leaves the memory in self unchanged.

Unlike read, the pointer may be unaligned.

See ptr::read_unaligned for safety concerns and examples.

pub unsafe fn copy_to(self, dest: *mut T, count: usize)  (since 1.26.0)

Copies count * size_of::<T>() bytes from self to dest. The source and destination may overlap.

NOTE: this has the same argument order as ptr::copy.

See ptr::copy for safety concerns and examples.

pub unsafe fn copy_to_nonoverlapping(self, dest: *mut T, count: usize)  (since 1.26.0)

Copies count * size_of::<T>() bytes from self to dest. The source and destination may not overlap.

NOTE: this has the same argument order as ptr::copy_nonoverlapping.

See ptr::copy_nonoverlapping for safety concerns and examples.

pub unsafe fn copy_from(self, src: *const T, count: usize)  (since 1.26.0)

Copies count * size_of::<T>() bytes from src to self. The source and destination may overlap.

NOTE: this has the opposite argument order of ptr::copy.

See ptr::copy for safety concerns and examples.

pub unsafe fn copy_from_nonoverlapping(self, src: *const T, count: usize)  (since 1.26.0)

Copies count * size_of::<T>() bytes from src to self. The source and destination may not overlap.

NOTE: this has the opposite argument order of ptr::copy_nonoverlapping.

See ptr::copy_nonoverlapping for safety concerns and examples.
pub unsafe fn drop_in_place(self)  (since 1.26.0)

Executes the destructor (if any) of the pointed-to value.

See ptr::drop_in_place for safety concerns and examples.

pub unsafe fn write(self, val: T)  (since 1.26.0)

Overwrites a memory location with the given value without reading or dropping the old value.

See ptr::write for safety concerns and examples.

pub unsafe fn write_bytes(self, val: u8, count: usize)  (since 1.26.0)

Invokes memset on the specified pointer, setting count * size_of::<T>() bytes of memory starting at self to val.

See ptr::write_bytes for safety concerns and examples.

pub unsafe fn write_volatile(self, val: T)  (since 1.26.0)

Performs a volatile write of a memory location with the given value without reading or dropping the old value.

Volatile operations are intended to act on I/O memory, and are guaranteed to not be elided or reordered by the compiler across other volatile operations.

See ptr::write_volatile for safety concerns and examples.

pub unsafe fn write_unaligned(self, val: T)  (since 1.26.0)

Overwrites a memory location with the given value without reading or dropping the old value.

Unlike write, the pointer may be unaligned.

See ptr::write_unaligned for safety concerns and examples.

pub unsafe fn replace(self, src: T) -> T  (since 1.26.0)

Replaces the value at self with src, returning the old value, without dropping either.

See ptr::replace for safety concerns and examples.

pub unsafe fn swap(self, with: *mut T)  (since 1.26.0)

Swaps the values at two mutable locations of the same type, without deinitializing either. They may overlap, unlike mem::swap which is otherwise equivalent.

See ptr::swap for safety concerns and examples.
pub fn align_offset(self, align: usize) -> usize  (since 1.36.0)

Computes the offset that needs to be applied to the pointer in order to make it aligned to align.

If it is not possible to align the pointer, the implementation returns usize::max_value().

The offset is expressed in number of T elements, and not bytes. The value returned can be used with the offset or offset_to methods.

There are no guarantees whatsoever that offsetting the pointer will not overflow or go beyond the allocation that the pointer points into. It is up to the caller to ensure that the returned offset is correct in all terms other than alignment.

The function panics if align is not a power-of-two.

Accessing adjacent u8 as u16:

    use std::mem::align_of;

    // `n` is an index into the array; any in-bounds value works for the illustration.
    let n = 1;
    let x = [5u8, 6u8, 7u8, 8u8, 9u8];
    let ptr = &x[n] as *const u8;
    let offset = ptr.align_offset(align_of::<u16>());
    if offset < x.len() - n - 1 {
        unsafe {
            let u16_ptr = ptr.add(offset) as *const u16;
            assert_ne!(*u16_ptr, 500);
        }
    } else {
        // while the pointer can be aligned via `offset`, it would point
        // outside the allocation
    }
impl<T> Copy for *const T where T: ?Sized

impl<T> Copy for *mut T where T: ?Sized

impl<T> !Sync for *mut T where T: ?Sized

impl<T> !Sync for *const T where T: ?Sized

impl<T> Hash for *mut T where T: ?Sized

    fn hash<H>(&self, state: &mut H) where H: Hasher
    fn hash_slice<H>(data: &[Self], state: &mut H) where H: Hasher  (since 1.3.0)
    Feeds a slice of this type into the given Hasher.

impl<T> Hash for *const T where T: ?Sized

    fn hash<H>(&self, state: &mut H) where H: Hasher
    fn hash_slice<H>(data: &[Self], state: &mut H) where H: Hasher  (since 1.3.0)
    Feeds a slice of this type into the given Hasher.

impl<T> Clone for *const T where T: ?Sized

    fn clone(&self) -> *const T
    fn clone_from(&mut self, source: &Self)
    Performs copy-assignment from source.

impl<T> Clone for *mut T where T: ?Sized

    fn clone(&self) -> *mut T
    fn clone_from(&mut self, source: &Self)
    Performs copy-assignment from source.

impl<T> Eq for *const T where T: ?Sized

impl<T> Eq for *mut T where T: ?Sized
impl<T> PartialOrd<*mut T> for *mut T where T: ?Sized

    fn partial_cmp(&self, other: &*mut T) -> Option<Ordering>
    fn lt(&self, other: &*mut T) -> bool
    fn le(&self, other: &*mut T) -> bool
    fn gt(&self, other: &*mut T) -> bool
    fn ge(&self, other: &*mut T) -> bool

impl<T> PartialOrd<*const T> for *const T where T: ?Sized

    fn partial_cmp(&self, other: &*const T) -> Option<Ordering>
    fn lt(&self, other: &*const T) -> bool
    fn le(&self, other: &*const T) -> bool
    fn gt(&self, other: &*const T) -> bool
    fn ge(&self, other: &*const T) -> bool

impl<T> Ord for *const T where T: ?Sized

    fn cmp(&self, other: &*const T) -> Ordering
    fn max(self, other: Self) -> Self  (since 1.21.0)
    Compares and returns the maximum of two values.
    fn min(self, other: Self) -> Self  (since 1.21.0)
    Compares and returns the minimum of two values.
    fn clamp(self, min: Self, max: Self) -> Self
    Restrict a value to a certain interval.

impl<T> Ord for *mut T where T: ?Sized

    fn cmp(&self, other: &*mut T) -> Ordering
    fn max(self, other: Self) -> Self  (since 1.21.0)
    Compares and returns the maximum of two values.
    fn min(self, other: Self) -> Self  (since 1.21.0)
    Compares and returns the minimum of two values.
    fn clamp(self, min: Self, max: Self) -> Self
    Restrict a value to a certain interval.

impl<T> Debug for *const T where T: ?Sized

impl<T> Debug for *mut T where T: ?Sized

impl<T> PartialEq<*const T> for *const T where T: ?Sized

    fn eq(&self, other: &*const T) -> bool
    fn ne(&self, other: &Rhs) -> bool
    This method tests for !=.

impl<T> PartialEq<*mut T> for *mut T where T: ?Sized

    fn eq(&self, other: &*mut T) -> bool
    fn ne(&self, other: &Rhs) -> bool
    This method tests for !=.
impl<T> Pointer for *const T where T: ?Sized

impl<T> Pointer for *mut T where T: ?Sized

impl<T> !Send for *const T where T: ?Sized

impl<T> !Send for *mut T where T: ?Sized

impl<T, U> CoerceUnsized<*mut U> for *mut T where T: Unsize<U> + ?Sized, U: ?Sized

impl<T, U> CoerceUnsized<*const U> for *const T where T: Unsize<U> + ?Sized, U: ?Sized

impl<T, U> CoerceUnsized<*const U> for *mut T where T: Unsize<U> + ?Sized, U: ?Sized

impl<T, U> DispatchFromDyn<*const U> for *const T where T: Unsize<U> + ?Sized, U: ?Sized

impl<T, U> DispatchFromDyn<*mut U> for *mut T where T: Unsize<U> + ?Sized, U: ?Sized

impl<T: RefUnwindSafe + ?Sized> UnwindSafe for *const T  (since 1.9.0)

impl<T: RefUnwindSafe + ?Sized> UnwindSafe for *mut T  (since 1.9.0)

impl<T: ?Sized> RefUnwindSafe for *const T where T: RefUnwindSafe

impl<T: ?Sized> Unpin for *const T where T: Unpin

impl<T: ?Sized> RefUnwindSafe for *mut T where T: RefUnwindSafe

impl<T: ?Sized> Unpin for *mut T where T: Unpin
impl<T, U> TryFrom<U> for T where U: Into<T>

    type Error = Infallible
    The type returned in the event of a conversion error.
    fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

impl<T, U> Into<U> for T where U: From<T>

impl<T> From<T> for T

impl<T, U> TryInto<U> for T where U: TryFrom<T>

    type Error = <U as TryFrom<T>>::Error
    The type returned in the event of a conversion error.
    fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

impl<T> Borrow<T> for T where T: ?Sized

    fn borrow(&self) -> &T
impl<T> BorrowMut<T> for T where T: ?Sized

    fn borrow_mut(&mut self) -> &mut T
impl<T> Any for T where T: 'static + ?Sized

impl<T> ToOwned for T where T: Clone

    type Owned = T
    The resulting type after obtaining ownership.
    fn to_owned(&self) -> T
    fn clone_into(&self, target: &mut T)
© 2010 The Rust Project Developers
Licensed under the Apache License, Version 2.0 or the MIT license, at your option.
https://doc.rust-lang.org/std/primitive.pointer.html