// SPDX-License-Identifier: Apache-2.0 OR MIT

/*!
<!-- tidy:crate-doc:start -->
Portable atomic types including support for 128-bit atomics, atomic float, etc.

- Provide all atomic integer types (`Atomic{I,U}{8,16,32,64}`) for all targets that can use atomic CAS. (i.e., all targets that can use `std`, and most no-std targets)
- Provide `AtomicI128` and `AtomicU128`.
- Provide `AtomicF32` and `AtomicF64`. ([optional, requires the `float` feature](#optional-features-float))
- Provide atomic load/store for targets where atomic is not available at all in the standard library. (RISC-V without A-extension, MSP430, AVR)
- Provide atomic CAS for targets where atomic CAS is not available in the standard library. (thumbv6m, pre-v6 Arm, RISC-V without A-extension, MSP430, AVR, Xtensa, etc.) (always enabled for MSP430 and AVR, [optional](#optional-features-critical-section) otherwise)
- Provide stable equivalents of the standard library's atomic types' unstable APIs, such as [`AtomicPtr::fetch_*`](https://github.com/rust-lang/rust/issues/99108).
- Make features that require newer compilers, such as [`fetch_{max,min}`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.fetch_max), [`fetch_update`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.fetch_update), [`as_ptr`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.as_ptr), [`from_ptr`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.from_ptr), [`AtomicBool::fetch_not`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicBool.html#method.fetch_not) and [stronger CAS failure ordering](https://github.com/rust-lang/rust/pull/98383) available on Rust 1.34+.
- Provide workarounds for bugs in the standard library's atomic-related APIs, such as [rust-lang/rust#100650], `fence`/`compiler_fence` on MSP430 causing an LLVM error, etc.

<!-- TODO:
- mention Atomic{I,U}*::fetch_neg, Atomic{I*,U*,Ptr}::bit_*, etc.
- mention optimizations not available in the standard library's equivalents
-->

The portable-atomic version of `std::sync::Arc` is provided by the [portable-atomic-util](https://github.com/taiki-e/portable-atomic/tree/HEAD/portable-atomic-util) crate.

## Usage

Add this to your `Cargo.toml`:

```toml
[dependencies]
portable-atomic = "1"
```
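
A minimal sketch of basic usage, assuming a target with atomic CAS (the API mirrors `std::sync::atomic`):

```rust
use portable_atomic::{AtomicUsize, Ordering};

static COUNT: AtomicUsize = AtomicUsize::new(0);

fn main() {
    // `fetch_add` atomically increments and returns the previous value.
    COUNT.fetch_add(1, Ordering::Relaxed);
    assert_eq!(COUNT.load(Ordering::Relaxed), 1);
}
```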

The default features are mainly for users who use atomics larger than the pointer width.
If you don't need them, disabling the default features may reduce code size and compile time slightly.

```toml
[dependencies]
portable-atomic = { version = "1", default-features = false }
```

If your crate supports no-std environments and requires atomic CAS, enabling the `require-cas` feature will allow `portable-atomic` to display a [helpful error message](https://github.com/taiki-e/portable-atomic/pull/100) to users on targets that require additional action on the user's side to provide atomic CAS.

```toml
[dependencies]
portable-atomic = { version = "1.3", default-features = false, features = ["require-cas"] }
```

## 128-bit atomics support

Native 128-bit atomic operations are available on x86_64 (Rust 1.59+), AArch64 (Rust 1.59+), riscv64 (Rust 1.59+), Arm64EC (Rust 1.84+), s390x (Rust 1.84+), and powerpc64 (nightly only); otherwise, the fallback implementation is used.

On x86_64, even if `cmpxchg16b` is not available at compile-time (note: the `cmpxchg16b` target feature is enabled by default only on Apple and Windows (except Windows 7) targets), run-time detection checks whether `cmpxchg16b` is available. If `cmpxchg16b` is available at neither compile-time nor run-time, the fallback implementation is used. See also the [`portable_atomic_no_outline_atomics`](#optional-cfg-no-outline-atomics) cfg.

They are usually implemented using inline assembly. When using Miri or ThreadSanitizer, which do not support inline assembly, core intrinsics are used instead of inline assembly where possible.

See the [`atomic128` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/atomic128/README.md) for details.
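
For illustration, a minimal sketch of checking which implementation is in use (the output depends on compile-time target features and, on some targets, run-time detection):

```rust
use portable_atomic::AtomicU128;

fn main() {
    // `is_always_lock_free` reflects compile-time support only;
    // `is_lock_free` additionally accounts for run-time CPU feature detection.
    println!("always lock-free: {}", AtomicU128::is_always_lock_free());
    println!("lock-free now: {}", AtomicU128::is_lock_free());
}
```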

## Optional features

- **`fallback`** *(enabled by default)*<br>
  Enable fallback implementations.

  Disabling this allows only atomic types for which the platform natively supports atomic operations.

- <a name="optional-features-float"></a>**`float`**<br>
  Provide `AtomicF{32,64}`.

  Note that most `fetch_*` operations on atomic floats are implemented using CAS loops, which can be slower than the equivalent operations on atomic integers. ([GPU targets have atomic instructions for float, so we plan to use these instructions for GPU targets in the future.](https://github.com/taiki-e/portable-atomic/issues/34))
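
  For illustration, a minimal sketch of the float API described above (requires enabling the `float` feature; `fetch_add` here is a CAS loop internally, not a dedicated float instruction):

  ```rust
  use portable_atomic::{AtomicF32, Ordering};

  let a = AtomicF32::new(1.5);
  // Atomically adds via compare-exchange on the underlying bit pattern.
  a.fetch_add(2.5, Ordering::Relaxed);
  assert_eq!(a.load(Ordering::Relaxed), 4.0);
  ```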

- **`std`**<br>
  Use `std`.

- <a name="optional-features-require-cas"></a>**`require-cas`**<br>
  Emit a compile error if atomic CAS is not available. See the [Usage](#usage) section and [#100](https://github.com/taiki-e/portable-atomic/pull/100) for more.

- <a name="optional-features-serde"></a>**`serde`**<br>
  Implement `serde::{Serialize,Deserialize}` for atomic types.

  Note:
  - The MSRV when this feature is enabled depends on the MSRV of [serde].

- <a name="optional-features-critical-section"></a>**`critical-section`**<br>
  When this feature is enabled, this crate uses [critical-section] to provide atomic CAS for targets where
  it is not natively available. When enabling it, you should provide a suitable critical section implementation
  for the current target; see the [critical-section] documentation for details on how to do so.

  `critical-section` support is useful for getting atomic CAS when the [`unsafe-assume-single-core` feature](#optional-features-unsafe-assume-single-core) can't be used,
  such as on multi-core targets, in unprivileged code running under some RTOS, or in environments where disabling interrupts
  needs extra care due to e.g. real-time requirements.

  Note that with the `critical-section` feature, critical sections are taken for all atomic operations, while with
  the [`unsafe-assume-single-core` feature](#optional-features-unsafe-assume-single-core) some operations don't require disabling interrupts (loads and stores, but
  additionally on MSP430 `add`, `sub`, `and`, `or`, `xor`, `not`). Therefore, for better performance, if
  all the `critical-section` implementation for your target does is disable interrupts, prefer using the
  `unsafe-assume-single-core` feature instead.

  Note:
  - The MSRV when this feature is enabled depends on the MSRV of [critical-section].
  - It is usually *not* recommended to always enable this feature in dependencies of a library.

    Enabling this feature will prevent the end user from having the chance to take advantage of other (potentially more) efficient implementations ([implementations provided by the `unsafe-assume-single-core` feature, default implementations on MSP430 and AVR](#optional-features-unsafe-assume-single-core), the implementation proposed in [#60], etc. Other systems may also be supported in the future).

    The recommended approach for libraries is to leave it up to the end user whether or not to enable this feature. (However, it may make sense to enable this feature by default for libraries specific to a platform where other implementations are known not to work.)

    For example, the `Cargo.toml` of an end user who uses both a crate that provides a critical-section implementation and a crate that optionally depends on portable-atomic would be expected to look like this:

    ```toml
    [dependencies]
    portable-atomic = { version = "1", default-features = false, features = ["critical-section"] }
    crate-provides-critical-section-impl = "..."
    crate-uses-portable-atomic-as-feature = { version = "...", features = ["portable-atomic"] }
    ```

- <a name="optional-features-unsafe-assume-single-core"></a>**`unsafe-assume-single-core`**<br>
  Assume that the target is single-core.
  When this feature is enabled, this crate provides atomic CAS for targets where atomic CAS is not available in the standard library by disabling interrupts.

  This feature is `unsafe`, and note the following safety requirements:
  - Enabling this feature for multi-core systems is always **unsound**.
  - This uses privileged instructions to disable interrupts, so it usually doesn't work in unprivileged mode.
    Enabling this feature in an environment where privileged instructions are not available, or where the instructions used are not sufficient to disable interrupts in the system, is also usually considered **unsound**, although the details are system-dependent.

    The following are known cases:
    - On pre-v6 Arm, this disables only IRQs by default. For many systems (e.g., GBA) this is enough. If the system needs to disable both IRQs and FIQs, you also need to enable the `disable-fiq` feature.
    - On RISC-V without A-extension, this generates code for machine-mode (M-mode) by default. If you enable the `s-mode` feature together, this generates code for supervisor-mode (S-mode). In particular, `qemu-system-riscv*` uses [OpenSBI](https://github.com/riscv-software-src/opensbi) as the default firmware, and code running on top of OpenSBI runs in S-mode.

    See also the [`interrupt` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/interrupt/README.md).

  Consider using the [`critical-section` feature](#optional-features-critical-section) for systems that cannot use this feature.

  It is **very strongly discouraged** to enable this feature in libraries that depend on `portable-atomic`. The recommended approach for libraries is to leave it up to the end user whether or not to enable this feature. (However, it may make sense to enable this feature by default for libraries specific to a platform where it is guaranteed to always be sound, for example in a hardware abstraction layer targeting a single-core chip.)

  Armv6-M (thumbv6m), pre-v6 Arm (e.g., thumbv4t, thumbv5te), RISC-V without A-extension, and Xtensa are currently supported.

  Since all MSP430 and AVR chips are single-core, we always provide atomic CAS for them without this feature.

  Enabling this feature for targets that have atomic CAS will result in a compile error.

  Feel free to submit an issue if your target is not supported yet.

## Optional cfg

One way to enable a cfg is to set [rustflags in the cargo config](https://doc.rust-lang.org/cargo/reference/config.html#targettriplerustflags):

```toml
# .cargo/config.toml
[target.<target>]
rustflags = ["--cfg", "portable_atomic_no_outline_atomics"]
```

Or set the `RUSTFLAGS` environment variable:

```sh
RUSTFLAGS="--cfg portable_atomic_no_outline_atomics" cargo ...
```

- <a name="optional-cfg-unsafe-assume-single-core"></a>**`--cfg portable_atomic_unsafe_assume_single_core`**<br>
  Since 1.4.0, this cfg is an alias of the [`unsafe-assume-single-core` feature](#optional-features-unsafe-assume-single-core).

  Originally, we were providing these as cfgs instead of features, but based on a strong request from the embedded ecosystem, we have agreed to provide them as features as well. See [#94](https://github.com/taiki-e/portable-atomic/pull/94) for more.

- <a name="optional-cfg-no-outline-atomics"></a>**`--cfg portable_atomic_no_outline_atomics`**<br>
  Disable dynamic dispatching by run-time CPU feature detection.

  If dynamic dispatching by run-time CPU feature detection is enabled, it allows maintaining support for older CPUs while still using features that those CPUs lack, such as CMPXCHG16B (x86_64) and FEAT_LSE/FEAT_LSE2 (AArch64).

  Note:
  - Dynamic detection is currently only supported on x86_64, AArch64, Arm, RISC-V (disabled by default), Arm64EC, and powerpc64; otherwise, it works the same as when this cfg is set.
  - If the required target features are enabled at compile-time, the atomic operations are inlined.
  - This is compatible with no-std (as with all features except `std`).
  - On some targets, run-time detection is disabled by default, mainly due to incomplete build environments, and can be enabled by `--cfg portable_atomic_outline_atomics`. (When both cfgs are enabled, the `*_no_*` cfg takes precedence.)
  - Some AArch64 targets enable LLVM's `outline-atomics` target feature by default, so if you set this cfg, you may want to disable that as well. (portable-atomic's outline-atomics does not depend on the compiler-rt symbols, so even if you need to disable LLVM's outline-atomics, you may not need to disable portable-atomic's outline-atomics.)

  See also the [`atomic128` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/atomic128/README.md).

## Related Projects

- [atomic-maybe-uninit]: Atomic operations on potentially uninitialized integers.
- [atomic-memcpy]: Byte-wise atomic memcpy.

[#60]: https://github.com/taiki-e/portable-atomic/issues/60
[atomic-maybe-uninit]: https://github.com/taiki-e/atomic-maybe-uninit
[atomic-memcpy]: https://github.com/taiki-e/atomic-memcpy
[critical-section]: https://github.com/rust-embedded/critical-section
[rust-lang/rust#100650]: https://github.com/rust-lang/rust/issues/100650
[serde]: https://github.com/serde-rs/serde

<!-- tidy:crate-doc:end -->
*/

#![no_std]
#![doc(test(
    no_crate_inject,
    attr(
        deny(warnings, rust_2018_idioms, single_use_lifetimes),
        allow(dead_code, unused_variables)
    )
))]
#![cfg_attr(not(portable_atomic_no_unsafe_op_in_unsafe_fn), warn(unsafe_op_in_unsafe_fn))] // unsafe_op_in_unsafe_fn requires Rust 1.52
#![cfg_attr(portable_atomic_no_unsafe_op_in_unsafe_fn, allow(unused_unsafe))]
#![warn(
    // Lints that may help when writing public library.
    missing_debug_implementations,
    // missing_docs,
    clippy::alloc_instead_of_core,
    clippy::exhaustive_enums,
    clippy::exhaustive_structs,
    clippy::impl_trait_in_params,
    clippy::missing_inline_in_public_items,
    clippy::std_instead_of_alloc,
    clippy::std_instead_of_core,
    // Code outside of cfg(feature = "float") shouldn't use float.
    clippy::float_arithmetic,
)]
#![cfg_attr(not(portable_atomic_no_asm), warn(missing_docs))] // module-level #![allow(missing_docs)] doesn't work for macros on old rustc
#![allow(clippy::inline_always, clippy::used_underscore_items)]
// asm_experimental_arch
// AVR, MSP430, and Xtensa are tier 3 platforms and require nightly anyway.
// On tier 2 platforms (powerpc64), we use cfg set by build script to
// determine whether this feature is available or not.
#![cfg_attr(
    all(
        not(portable_atomic_no_asm),
        any(
            target_arch = "avr",
            target_arch = "msp430",
            all(target_arch = "xtensa", portable_atomic_unsafe_assume_single_core),
            all(target_arch = "powerpc64", portable_atomic_unstable_asm_experimental_arch),
        ),
    ),
    feature(asm_experimental_arch)
)]
// Old nightly only
// These features are already stabilized or have already been removed from compilers,
// and can safely be enabled for old nightly as long as version detection works.
// - cfg(target_has_atomic)
// - asm! on AArch64, Arm, RISC-V, x86, x86_64, Arm64EC, s390x
// - llvm_asm! on AVR (tier 3) and MSP430 (tier 3)
// - #[instruction_set] on non-Linux/Android pre-v6 Arm (tier 3)
// This also helps us test that our assembly code works with the minimum external
// LLVM version of the first rustc version in which inline assembly was stabilized.
#![cfg_attr(portable_atomic_unstable_cfg_target_has_atomic, feature(cfg_target_has_atomic))]
#![cfg_attr(
    all(
        portable_atomic_unstable_asm,
        any(
            target_arch = "aarch64",
            target_arch = "arm",
            target_arch = "riscv32",
            target_arch = "riscv64",
            target_arch = "x86",
            target_arch = "x86_64",
        ),
    ),
    feature(asm)
)]
#![cfg_attr(
    all(
        portable_atomic_unstable_asm_experimental_arch,
        any(target_arch = "arm64ec", target_arch = "s390x"),
    ),
    feature(asm_experimental_arch)
)]
#![cfg_attr(
    all(any(target_arch = "avr", target_arch = "msp430"), portable_atomic_no_asm),
    feature(llvm_asm)
)]
#![cfg_attr(
    all(
        target_arch = "arm",
        portable_atomic_unstable_isa_attribute,
        any(test, portable_atomic_unsafe_assume_single_core),
        not(any(target_feature = "v6", portable_atomic_target_feature = "v6")),
        not(target_has_atomic = "ptr"),
    ),
    feature(isa_attribute)
)]
// Miri and/or ThreadSanitizer only
// They do not support inline assembly, so we need to use unstable features instead.
// Since they require nightly compilers anyway, we can use the unstable features.
// This is not an ideal situation, but it is still better than always using lock-based
// fallback and causing memory ordering problems to be missed by these checkers.
#![cfg_attr(
    all(
        any(
            target_arch = "aarch64",
            target_arch = "arm64ec",
            target_arch = "powerpc64",
            target_arch = "riscv64",
            target_arch = "s390x",
        ),
        any(miri, portable_atomic_sanitize_thread),
    ),
    allow(internal_features)
)]
#![cfg_attr(
    all(
        any(
            target_arch = "aarch64",
            target_arch = "arm64ec",
            target_arch = "powerpc64",
            target_arch = "riscv64",
            target_arch = "s390x",
        ),
        any(miri, portable_atomic_sanitize_thread),
    ),
    feature(core_intrinsics)
)]
// docs.rs only (cfg is enabled by docs.rs, not build script)
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(
    all(
        portable_atomic_no_atomic_load_store,
        not(any(
            target_arch = "avr",
            target_arch = "bpf",
            target_arch = "msp430",
            target_arch = "riscv32",
            target_arch = "riscv64",
            feature = "critical-section",
        )),
    ),
    allow(unused_imports, unused_macros)
)]

// There are currently no 128-bit or higher builtin targets.
// (Although some of our generic code is written with the future
// addition of 128-bit targets in mind.)
// Note that Rust (and C99) pointers must be at least 16-bit (i.e., 8-bit targets are impossible): https://github.com/rust-lang/rust/pull/49305
#[cfg(not(any(
    target_pointer_width = "16",
    target_pointer_width = "32",
    target_pointer_width = "64",
)))]
compile_error!(
    "portable-atomic currently only supports targets with {16,32,64}-bit pointer width; \
     if you need support for others, \
     please submit an issue at <https://github.com/taiki-e/portable-atomic>"
);

#[cfg(portable_atomic_unsafe_assume_single_core)]
#[cfg_attr(portable_atomic_no_cfg_target_has_atomic, cfg(not(portable_atomic_no_atomic_cas)))]
#[cfg_attr(not(portable_atomic_no_cfg_target_has_atomic), cfg(target_has_atomic = "ptr"))]
compile_error!(
    "`portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature) \
     is not compatible with targets that support atomic CAS;\n\
     see also <https://github.com/taiki-e/portable-atomic/issues/148> for troubleshooting"
);
#[cfg(portable_atomic_unsafe_assume_single_core)]
#[cfg_attr(portable_atomic_no_cfg_target_has_atomic, cfg(portable_atomic_no_atomic_cas))]
#[cfg_attr(not(portable_atomic_no_cfg_target_has_atomic), cfg(not(target_has_atomic = "ptr")))]
#[cfg(not(any(
    target_arch = "arm",
    target_arch = "avr",
    target_arch = "msp430",
    target_arch = "riscv32",
    target_arch = "riscv64",
    target_arch = "xtensa",
)))]
compile_error!(
    "`portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature) \
     is not supported yet on this target;\n\
     if you need unsafe-assume-single-core support for this target,\n\
     please submit an issue at <https://github.com/taiki-e/portable-atomic>"
);

#[cfg(portable_atomic_no_outline_atomics)]
#[cfg(not(any(
    target_arch = "aarch64",
    target_arch = "arm",
    target_arch = "arm64ec",
    target_arch = "powerpc64",
    target_arch = "riscv32",
    target_arch = "riscv64",
    target_arch = "x86_64",
)))]
compile_error!("`portable_atomic_no_outline_atomics` cfg is not compatible with this target");
#[cfg(portable_atomic_outline_atomics)]
#[cfg(not(any(
    target_arch = "aarch64",
    target_arch = "powerpc64",
    target_arch = "riscv32",
    target_arch = "riscv64",
)))]
compile_error!("`portable_atomic_outline_atomics` cfg is not compatible with this target");

#[cfg(portable_atomic_disable_fiq)]
#[cfg(not(all(
    target_arch = "arm",
    not(any(target_feature = "mclass", portable_atomic_target_feature = "mclass")),
)))]
compile_error!(
    "`portable_atomic_disable_fiq` cfg (`disable-fiq` feature) is only available on pre-v6 Arm"
);
#[cfg(portable_atomic_s_mode)]
#[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64")))]
compile_error!("`portable_atomic_s_mode` cfg (`s-mode` feature) is only available on RISC-V");
#[cfg(portable_atomic_force_amo)]
#[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64")))]
compile_error!("`portable_atomic_force_amo` cfg (`force-amo` feature) is only available on RISC-V");

#[cfg(portable_atomic_disable_fiq)]
#[cfg(not(portable_atomic_unsafe_assume_single_core))]
compile_error!(
    "`portable_atomic_disable_fiq` cfg (`disable-fiq` feature) may only be used together with `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature)"
);
#[cfg(portable_atomic_s_mode)]
#[cfg(not(portable_atomic_unsafe_assume_single_core))]
compile_error!(
    "`portable_atomic_s_mode` cfg (`s-mode` feature) may only be used together with `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature)"
);
#[cfg(portable_atomic_force_amo)]
#[cfg(not(portable_atomic_unsafe_assume_single_core))]
compile_error!(
    "`portable_atomic_force_amo` cfg (`force-amo` feature) may only be used together with `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature)"
);

#[cfg(all(portable_atomic_unsafe_assume_single_core, feature = "critical-section"))]
compile_error!(
    "you may not enable `critical-section` feature and `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature) at the same time"
);

#[cfg(feature = "require-cas")]
#[cfg_attr(
    portable_atomic_no_cfg_target_has_atomic,
    cfg(not(any(
        not(portable_atomic_no_atomic_cas),
        portable_atomic_unsafe_assume_single_core,
        feature = "critical-section",
        target_arch = "avr",
        target_arch = "msp430",
    )))
)]
#[cfg_attr(
    not(portable_atomic_no_cfg_target_has_atomic),
    cfg(not(any(
        target_has_atomic = "ptr",
        portable_atomic_unsafe_assume_single_core,
        feature = "critical-section",
        target_arch = "avr",
        target_arch = "msp430",
    )))
)]
compile_error!(
    "dependents require atomic CAS but it is not available on this target by default;\n\
    consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features;\n\
    see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
);

#[cfg(any(test, feature = "std"))]
extern crate std;

#[macro_use]
mod cfgs;
#[cfg(target_pointer_width = "128")]
pub use self::{cfg_has_atomic_128 as cfg_has_atomic_ptr, cfg_no_atomic_128 as cfg_no_atomic_ptr};
#[cfg(target_pointer_width = "16")]
pub use self::{cfg_has_atomic_16 as cfg_has_atomic_ptr, cfg_no_atomic_16 as cfg_no_atomic_ptr};
#[cfg(target_pointer_width = "32")]
pub use self::{cfg_has_atomic_32 as cfg_has_atomic_ptr, cfg_no_atomic_32 as cfg_no_atomic_ptr};
#[cfg(target_pointer_width = "64")]
pub use self::{cfg_has_atomic_64 as cfg_has_atomic_ptr, cfg_no_atomic_64 as cfg_no_atomic_ptr};

#[macro_use]
mod utils;

#[cfg(test)]
#[macro_use]
mod tests;

#[doc(no_inline)]
pub use core::sync::atomic::Ordering;

// LLVM doesn't support fence/compiler_fence for MSP430.
#[cfg(target_arch = "msp430")]
pub use self::imp::msp430::{compiler_fence, fence};
#[doc(no_inline)]
#[cfg(not(target_arch = "msp430"))]
pub use core::sync::atomic::{compiler_fence, fence};

mod imp;

pub mod hint {
    //! Re-export of the [`core::hint`] module.
    //!
    //! The only difference from the [`core::hint`] module is that [`spin_loop`]
    //! is available in all Rust versions that this crate supports.
    //!
    //! ```
    //! use portable_atomic::hint;
    //!
    //! hint::spin_loop();
    //! ```

    #[doc(no_inline)]
    pub use core::hint::*;

    /// Emits a machine instruction to signal the processor that it is running in
    /// a busy-wait spin-loop ("spin lock").
    ///
    /// Upon receiving the spin-loop signal the processor can optimize its behavior by,
    /// for example, saving power or switching hyper-threads.
    ///
    /// This function is different from [`thread::yield_now`], which directly
    /// yields to the system's scheduler, whereas `spin_loop` does not interact
    /// with the operating system.
    ///
    /// A common use case for `spin_loop` is implementing bounded optimistic
    /// spinning in a CAS loop in synchronization primitives. To avoid problems
    /// like priority inversion, it is strongly recommended that the spin loop is
    /// terminated after a finite amount of iterations and an appropriate blocking
    /// syscall is made.
    ///
    /// **Note:** On platforms that do not support receiving spin-loop hints this
    /// function does not do anything at all.
    ///
    /// [`thread::yield_now`]: https://doc.rust-lang.org/std/thread/fn.yield_now.html
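    ///
    /// # Examples
    ///
    /// A minimal sketch of bounded spinning on a flag (the `ready` flag and the
    /// iteration bound are illustrative, not part of the API):
    ///
    /// ```
    /// use portable_atomic::{hint, AtomicBool, Ordering};
    ///
    /// let ready = AtomicBool::new(true);
    /// let mut spins = 0;
    /// // Spin for a bounded number of iterations, hinting the processor each time.
    /// while !ready.load(Ordering::Acquire) && spins < 100 {
    ///     hint::spin_loop();
    ///     spins += 1;
    /// }
    /// ```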
    #[inline]
    pub fn spin_loop() {
        #[allow(deprecated)]
        core::sync::atomic::spin_loop_hint();
    }
}

#[cfg(doc)]
use core::sync::atomic::Ordering::{AcqRel, Acquire, Relaxed, Release, SeqCst};
use core::{fmt, ptr};

#[cfg(miri)]
use crate::utils::strict;

cfg_has_atomic_8! {
/// A boolean type which can be safely shared between threads.
///
/// This type has the same in-memory representation as a [`bool`].
///
/// If the compiler and the platform support atomic loads and stores of `u8`,
/// this type is a wrapper for the standard library's
/// [`AtomicBool`](core::sync::atomic::AtomicBool). If the platform supports it
/// but the compiler does not, atomic operations are implemented using inline
/// assembly.
#[repr(C, align(1))]
pub struct AtomicBool {
    v: core::cell::UnsafeCell<u8>,
}

impl Default for AtomicBool {
    /// Creates an `AtomicBool` initialized to `false`.
    #[inline]
    fn default() -> Self {
        Self::new(false)
    }
}

impl From<bool> for AtomicBool {
    /// Converts a `bool` into an `AtomicBool`.
    #[inline]
    fn from(b: bool) -> Self {
        Self::new(b)
    }
}

// Send is implicitly implemented.
// SAFETY: any data races are prevented by disabling interrupts or
// atomic intrinsics (see module-level comments).
unsafe impl Sync for AtomicBool {}

// UnwindSafe is implicitly implemented.
#[cfg(not(portable_atomic_no_core_unwind_safe))]
impl core::panic::RefUnwindSafe for AtomicBool {}
#[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
impl std::panic::RefUnwindSafe for AtomicBool {}

impl_debug_and_serde!(AtomicBool);

impl AtomicBool {
    /// Creates a new `AtomicBool`.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::AtomicBool;
    ///
    /// let atomic_true = AtomicBool::new(true);
    /// let atomic_false = AtomicBool::new(false);
    /// ```
    #[inline]
    #[must_use]
    pub const fn new(v: bool) -> Self {
        static_assert_layout!(AtomicBool, bool);
        Self { v: core::cell::UnsafeCell::new(v as u8) }
    }

    // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
    const_fn! {
        const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
        /// Creates a new `AtomicBool` from a pointer.
        ///
        /// This is `const fn` on Rust 1.83+.
        ///
        /// # Safety
        ///
        /// * `ptr` must be aligned to `align_of::<AtomicBool>()` (note that on some platforms this can
        ///   be bigger than `align_of::<bool>()`).
        /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
        /// * If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
        ///   behind `ptr` must have a happens-before relationship with atomic accesses via the returned
        ///   value (or vice-versa).
        ///   * In other words, time periods where the value is accessed atomically may not overlap
        ///     with periods where the value is accessed non-atomically.
        ///   * This requirement is trivially satisfied if `ptr` is never used non-atomically for the
        ///     duration of lifetime `'a`. Most use cases should be able to follow this guideline.
        ///   * This requirement is also trivially satisfied if all accesses (atomic or not) are done
        ///     from the same thread.
        /// * If this atomic type is *not* lock-free:
        ///   * Any accesses to the value behind `ptr` must have a happens-before relationship
        ///     with accesses via the returned value (or vice-versa).
        ///   * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
        ///     be compatible with operations performed by this atomic type.
        /// * This method must not be used to create overlapping or mixed-size atomic accesses, as
        ///   these are not supported by the memory model.
        ///
        /// [valid]: core::ptr#safety
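        ///
        /// # Examples
        ///
        /// A minimal sketch where all accesses for the pointer's lifetime go through
        /// the returned atomic, which trivially satisfies the requirements above:
        ///
        /// ```
        /// use portable_atomic::{AtomicBool, Ordering};
        ///
        /// let mut v = false;
        /// let ptr: *mut bool = &mut v;
        /// // SAFETY: `ptr` is valid and aligned, and the value is only accessed
        /// // through the returned reference for its lifetime.
        /// let a = unsafe { AtomicBool::from_ptr(ptr) };
        /// a.store(true, Ordering::Relaxed);
        /// assert!(a.load(Ordering::Relaxed));
        /// ```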
        #[inline]
        #[must_use]
        pub const unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a Self {
            #[allow(clippy::cast_ptr_alignment)]
            // SAFETY: guaranteed by the caller
            unsafe { &*(ptr as *mut Self) }
        }
    }

    /// Returns `true` if operations on values of this type are lock-free.
    ///
    /// If the compiler or the platform doesn't support the necessary
    /// atomic instructions, global locks for every potentially
    /// concurrent atomic operation will be used.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::AtomicBool;
    ///
    /// let is_lock_free = AtomicBool::is_lock_free();
    /// ```
    #[inline]
    #[must_use]
    pub fn is_lock_free() -> bool {
        imp::AtomicU8::is_lock_free()
    }

    /// Returns `true` if operations on values of this type are always lock-free.
    ///
    /// If the compiler or the platform doesn't support the necessary
    /// atomic instructions, global locks for every potentially
    /// concurrent atomic operation will be used.
    ///
    /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
    /// this type may be lock-free even if the function returns false.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::AtomicBool;
    ///
    /// const IS_ALWAYS_LOCK_FREE: bool = AtomicBool::is_always_lock_free();
    /// ```
    #[inline]
    #[must_use]
    pub const fn is_always_lock_free() -> bool {
        imp::AtomicU8::IS_ALWAYS_LOCK_FREE
    }
    #[cfg(test)]
    const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();

    const_fn! {
        const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
        /// Returns a mutable reference to the underlying [`bool`].
        ///
        /// This is safe because the mutable reference guarantees that no other threads are
        /// concurrently accessing the atomic data.
        ///
        /// This is `const fn` on Rust 1.83+.
        ///
        /// # Examples
        ///
        /// ```
        /// use portable_atomic::{AtomicBool, Ordering};
        ///
        /// let mut some_bool = AtomicBool::new(true);
        /// assert_eq!(*some_bool.get_mut(), true);
        /// *some_bool.get_mut() = false;
        /// assert_eq!(some_bool.load(Ordering::SeqCst), false);
        /// ```
        #[inline]
        pub const fn get_mut(&mut self) -> &mut bool {
            // SAFETY: the mutable reference guarantees unique ownership.
            unsafe { &mut *self.as_ptr() }
        }
    }

    // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
    // https://github.com/rust-lang/rust/issues/76314

    const_fn! {
        const_if: #[cfg(not(portable_atomic_no_const_transmute))];
        /// Consumes the atomic and returns the contained value.
        ///
        /// This is safe because passing `self` by value guarantees that no other threads are
        /// concurrently accessing the atomic data.
        ///
        /// This is `const fn` on Rust 1.56+.
        ///
        /// # Examples
        ///
        /// ```
        /// use portable_atomic::AtomicBool;
        ///
        /// let some_bool = AtomicBool::new(true);
        /// assert_eq!(some_bool.into_inner(), true);
        /// ```
        #[inline]
        pub const fn into_inner(self) -> bool {
            // SAFETY: AtomicBool and u8 have the same size and in-memory representations,
            // so they can be safely transmuted.
            // (const UnsafeCell::into_inner is unstable)
            unsafe { core::mem::transmute::<AtomicBool, u8>(self) != 0 }
        }
    }

    /// Loads a value from the bool.
    ///
    /// `load` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.load(Ordering::Relaxed), true);
    /// ```
    #[inline]
    #[cfg_attr(
        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
        track_caller
    )]
    pub fn load(&self, order: Ordering) -> bool {
        self.as_atomic_u8().load(order) != 0
    }

    /// Stores a value into the bool.
    ///
    /// `store` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// some_bool.store(false, Ordering::Relaxed);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[cfg_attr(
        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
        track_caller
    )]
    pub fn store(&self, val: bool, order: Ordering) {
        self.as_atomic_u8().store(val as u8, order);
    }

    cfg_has_atomic_cas_or_amo32! {
    /// Stores a value into the bool, returning the previous value.
    ///
    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.swap(false, Ordering::Relaxed), true);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn swap(&self, val: bool, order: Ordering) -> bool {
        #[cfg(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64"))]
        {
            // See https://github.com/rust-lang/rust/pull/114034 for details.
            // https://github.com/rust-lang/rust/blob/1.80.0/library/core/src/sync/atomic.rs#L233
            // https://godbolt.org/z/Enh87Ph9b
            if val { self.fetch_or(true, order) } else { self.fetch_and(false, order) }
        }
        #[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64")))]
        {
            self.as_atomic_u8().swap(val as u8, order) != 0
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// The return value is a result indicating whether the new value was written and containing
    /// the previous value. On success this value is guaranteed to be equal to `current`.
    ///
    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `failure` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(
    ///     some_bool.compare_exchange(true, false, Ordering::Acquire, Ordering::Relaxed),
    ///     Ok(true)
    /// );
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    ///
    /// assert_eq!(
    ///     some_bool.compare_exchange(true, true, Ordering::SeqCst, Ordering::Acquire),
    ///     Err(false)
    /// );
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
    #[cfg_attr(
        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
        track_caller
    )]
    pub fn compare_exchange(
        &self,
        current: bool,
        new: bool,
        success: Ordering,
        failure: Ordering,
    ) -> Result<bool, bool> {
        #[cfg(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64"))]
        {
            // See https://github.com/rust-lang/rust/pull/114034 for details.
            // https://github.com/rust-lang/rust/blob/1.80.0/library/core/src/sync/atomic.rs#L233
            // https://godbolt.org/z/Enh87Ph9b
            crate::utils::assert_compare_exchange_ordering(success, failure);
            let order = crate::utils::upgrade_success_ordering(success, failure);
            let old = if current == new {
                // This is a no-op, but we still need to perform the operation
                // for memory ordering reasons.
                self.fetch_or(false, order)
            } else {
                // This sets the value to the new one and returns the old one.
                self.swap(new, order)
            };
            if old == current { Ok(old) } else { Err(old) }
        }
        #[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64")))]
        {
            match self.as_atomic_u8().compare_exchange(current as u8, new as u8, success, failure) {
                Ok(x) => Ok(x != 0),
                Err(x) => Err(x != 0),
            }
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// Unlike [`AtomicBool::compare_exchange`], this function is allowed to spuriously fail even when the
    /// comparison succeeds, which can result in more efficient code on some platforms. The
    /// return value is a result indicating whether the new value was written and containing the
    /// previous value.
    ///
    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `failure` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let val = AtomicBool::new(false);
    ///
    /// let new = true;
    /// let mut old = val.load(Ordering::Relaxed);
    /// loop {
    ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
    ///         Ok(_) => break,
    ///         Err(x) => old = x,
    ///     }
    /// }
    /// ```
    #[inline]
    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
    #[cfg_attr(
        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
        track_caller
    )]
    pub fn compare_exchange_weak(
        &self,
        current: bool,
        new: bool,
        success: Ordering,
        failure: Ordering,
    ) -> Result<bool, bool> {
        #[cfg(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64"))]
        {
            // See https://github.com/rust-lang/rust/pull/114034 for details.
            // https://github.com/rust-lang/rust/blob/1.80.0/library/core/src/sync/atomic.rs#L233
            // https://godbolt.org/z/Enh87Ph9b
            self.compare_exchange(current, new, success, failure)
        }
        #[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64")))]
        {
            match self
                .as_atomic_u8()
                .compare_exchange_weak(current as u8, new as u8, success, failure)
            {
                Ok(x) => Ok(x != 0),
                Err(x) => Err(x != 0),
            }
        }
    }

    /// Logical "and" with a boolean value.
    ///
    /// Performs a logical "and" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
        self.as_atomic_u8().fetch_and(val as u8, order) != 0
    }

    /// Logical "and" with a boolean value.
    ///
    /// Performs a logical "and" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Unlike `fetch_and`, this does not return the previous value.
    ///
    /// `and` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// This function may generate more efficient code than `fetch_and` on some platforms.
    ///
    /// - x86/x86_64: `lock and` instead of `cmpxchg` loop
    /// - MSP430: `and` instead of disabling interrupts
    ///
    /// Note: On x86/x86_64, the use of either function should not usually
    /// affect the generated code, because LLVM can properly optimize the case
    /// where the result is unused.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// foo.and(false, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(true);
    /// foo.and(true, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// foo.and(false, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn and(&self, val: bool, order: Ordering) {
        self.as_atomic_u8().and(val as u8, order);
    }

    /// Logical "nand" with a boolean value.
    ///
    /// Performs a logical "nand" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool {
        // https://github.com/rust-lang/rust/blob/1.80.0/library/core/src/sync/atomic.rs#L956-L970
        if val {
            // !(x & true) == !x
            // We must invert the bool.
            self.fetch_xor(true, order)
        } else {
            // !(x & false) == true
            // We must set the bool to true.
            self.swap(true, order)
        }
    }

    /// Logical "or" with a boolean value.
    ///
    /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
    /// new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_or(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
        self.as_atomic_u8().fetch_or(val as u8, order) != 0
    }

    /// Logical "or" with a boolean value.
    ///
    /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
    /// new value to the result.
    ///
    /// Unlike `fetch_or`, this does not return the previous value.
    ///
    /// `or` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// This function may generate more efficient code than `fetch_or` on some platforms.
    ///
    /// - x86/x86_64: `lock or` instead of `cmpxchg` loop
    /// - MSP430: `bis` instead of disabling interrupts
    ///
    /// Note: On x86/x86_64, the use of either function should not usually
    /// affect the generated code, because LLVM can properly optimize the case
    /// where the result is unused.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// foo.or(false, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// foo.or(true, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// foo.or(false, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn or(&self, val: bool, order: Ordering) {
        self.as_atomic_u8().or(val as u8, order);
    }

    /// Logical "xor" with a boolean value.
    ///
    /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
        self.as_atomic_u8().fetch_xor(val as u8, order) != 0
    }
1198
1199    /// Logical "xor" with a boolean value.
1200    ///
1201    /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
1202    /// the new value to the result.
1203    ///
1204    /// Unlike `fetch_xor`, this does not return the previous value.
1205    ///
1206    /// `xor` takes an [`Ordering`] argument which describes the memory ordering
1207    /// of this operation. All ordering modes are possible. Note that using
1208    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1209    /// using [`Release`] makes the load part [`Relaxed`].
1210    ///
1211    /// This function may generate more efficient code than `fetch_xor` on some platforms.
1212    ///
1213    /// - x86/x86_64: `lock xor` instead of `cmpxchg` loop
1214    /// - MSP430: `xor` instead of disabling interrupts
1215    ///
1216    /// Note: On x86/x86_64, the use of either function should not usually
1217    /// affect the generated code, because LLVM can properly optimize the case
1218    /// where the result is unused.
1219    ///
1220    /// # Examples
1221    ///
1222    /// ```
1223    /// use portable_atomic::{AtomicBool, Ordering};
1224    ///
1225    /// let foo = AtomicBool::new(true);
1226    /// foo.xor(false, Ordering::SeqCst);
1227    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1228    ///
1229    /// let foo = AtomicBool::new(true);
1230    /// foo.xor(true, Ordering::SeqCst);
1231    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1232    ///
1233    /// let foo = AtomicBool::new(false);
1234    /// foo.xor(false, Ordering::SeqCst);
1235    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1236    /// ```
1237    #[inline]
1238    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1239    pub fn xor(&self, val: bool, order: Ordering) {
1240        self.as_atomic_u8().xor(val as u8, order);
1241    }
1242
1243    /// Logical "not" on the current value.
1244    ///
1245    /// Performs a logical "not" operation on the current value, and sets
1246    /// the new value to the result.
1247    ///
1248    /// Returns the previous value.
1249    ///
1250    /// `fetch_not` takes an [`Ordering`] argument which describes the memory ordering
1251    /// of this operation. All ordering modes are possible. Note that using
1252    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1253    /// using [`Release`] makes the load part [`Relaxed`].
1254    ///
1255    /// # Examples
1256    ///
1257    /// ```
1258    /// use portable_atomic::{AtomicBool, Ordering};
1259    ///
1260    /// let foo = AtomicBool::new(true);
1261    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
1262    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1263    ///
1264    /// let foo = AtomicBool::new(false);
1265    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
1266    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1267    /// ```
1268    #[inline]
1269    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1270    pub fn fetch_not(&self, order: Ordering) -> bool {
1271        self.fetch_xor(true, order)
1272    }
1273
1274    /// Logical "not" on the current value.
1275    ///
1276    /// Performs a logical "not" operation on the current value, and sets
1277    /// the new value to the result.
1278    ///
1279    /// Unlike `fetch_not`, this does not return the previous value.
1280    ///
1281    /// `not` takes an [`Ordering`] argument which describes the memory ordering
1282    /// of this operation. All ordering modes are possible. Note that using
1283    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1284    /// using [`Release`] makes the load part [`Relaxed`].
1285    ///
1286    /// This function may generate more efficient code than `fetch_not` on some platforms.
1287    ///
1288    /// - x86/x86_64: `lock xor` instead of `cmpxchg` loop
1289    /// - MSP430: `xor` instead of disabling interrupts
1290    ///
1291    /// Note: On x86/x86_64, the use of either function should not usually
1292    /// affect the generated code, because LLVM can properly optimize the case
1293    /// where the result is unused.
1294    ///
1295    /// # Examples
1296    ///
1297    /// ```
1298    /// use portable_atomic::{AtomicBool, Ordering};
1299    ///
1300    /// let foo = AtomicBool::new(true);
1301    /// foo.not(Ordering::SeqCst);
1302    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1303    ///
1304    /// let foo = AtomicBool::new(false);
1305    /// foo.not(Ordering::SeqCst);
1306    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1307    /// ```
1308    #[inline]
1309    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1310    pub fn not(&self, order: Ordering) {
1311        self.xor(true, order);
1312    }
1313
1314    /// Fetches the value, and applies a function to it that returns an optional
1315    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1316    /// returned `Some(_)`, else `Err(previous_value)`.
1317    ///
1318    /// Note: This may call the function multiple times if the value has been
1319    /// changed from other threads in the meantime, as long as the function
1320    /// returns `Some(_)`, but the function will have been applied only once to
1321    /// the stored value.
1322    ///
1323    /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1324    /// ordering of this operation. The first describes the required ordering for
1325    /// when the operation finally succeeds while the second describes the
1326    /// required ordering for loads. These correspond to the success and failure
1327    /// orderings of [`compare_exchange`](Self::compare_exchange) respectively.
1328    ///
1329    /// Using [`Acquire`] as success ordering makes the store part of this
1330    /// operation [`Relaxed`], and using [`Release`] makes the final successful
1331    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1332    /// [`Acquire`] or [`Relaxed`].
1333    ///
1334    /// # Considerations
1335    ///
1336    /// This method is not magic; it is not provided by the hardware.
1337    /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
1338    /// and suffers from the same drawbacks.
1339    /// In particular, this method will not circumvent the [ABA Problem].
1340    ///
1341    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1342    ///
1343    /// # Panics
1344    ///
1345    /// Panics if `fetch_order` is [`Release`] or [`AcqRel`].
1346    ///
1347    /// # Examples
1348    ///
1349    /// ```
1350    /// use portable_atomic::{AtomicBool, Ordering};
1351    ///
1352    /// let x = AtomicBool::new(false);
1353    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1354    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1355    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1356    /// assert_eq!(x.load(Ordering::SeqCst), false);
1357    /// ```
1358    #[inline]
1359    #[cfg_attr(
1360        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1361        track_caller
1362    )]
1363    pub fn fetch_update<F>(
1364        &self,
1365        set_order: Ordering,
1366        fetch_order: Ordering,
1367        mut f: F,
1368    ) -> Result<bool, bool>
1369    where
1370        F: FnMut(bool) -> Option<bool>,
1371    {
1372        let mut prev = self.load(fetch_order);
1373        while let Some(next) = f(prev) {
1374            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1375                x @ Ok(_) => return x,
1376                Err(next_prev) => prev = next_prev,
1377            }
1378        }
1379        Err(prev)
1380    }
1381    } // cfg_has_atomic_cas_or_amo32!
1382
1383    const_fn! {
1384        // This function is actually `const fn`-compatible on Rust 1.32+,
1385        // but is made `const fn` only on Rust 1.58+ to match the other atomic types.
1386        const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
1387        /// Returns a mutable pointer to the underlying [`bool`].
1388        ///
1389        /// Returning an `*mut` pointer from a shared reference to this atomic is
1390        /// safe because the atomic types work with interior mutability. Any use of
1391        /// the returned raw pointer requires an `unsafe` block and has to uphold
1392        /// the safety requirements. If there is concurrent access, note the following
1393        /// additional safety requirements:
1394        ///
1395        /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
1396        ///   operations on it must be atomic.
1397        /// - Otherwise, any concurrent operations on it must be compatible with
1398        ///   operations performed by this atomic type.
1399        ///
1400        /// This is `const fn` on Rust 1.58+.
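        ///
        /// # Examples
        ///
        /// A minimal, single-threaded sketch: writing through the returned raw
        /// pointer is sound here because nothing else accesses the atomic
        /// concurrently.
        ///
        /// ```
        /// use portable_atomic::{AtomicBool, Ordering};
        ///
        /// let some_bool = AtomicBool::new(false);
        /// // SAFETY: `some_bool` is not accessed concurrently.
        /// unsafe { *some_bool.as_ptr() = true };
        /// assert_eq!(some_bool.load(Ordering::SeqCst), true);
        /// ```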
1401        #[inline]
1402        pub const fn as_ptr(&self) -> *mut bool {
1403            self.v.get() as *mut bool
1404        }
1405    }
1406
1407    #[inline(always)]
1408    fn as_atomic_u8(&self) -> &imp::AtomicU8 {
1409        // SAFETY: AtomicBool and imp::AtomicU8 have the same layout,
1410        // and both access data in the same way.
1411        unsafe { &*(self as *const Self as *const imp::AtomicU8) }
1412    }
1413}
1414// See https://github.com/taiki-e/portable-atomic/issues/180
1415#[cfg(not(feature = "require-cas"))]
1416cfg_no_atomic_cas! {
1417#[doc(hidden)]
1418#[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
1419impl<'a> AtomicBool {
1420    cfg_no_atomic_cas_or_amo32! {
1421    #[inline]
1422    pub fn swap(&self, val: bool, order: Ordering) -> bool
1423    where
1424        &'a Self: HasSwap,
1425    {
1426        unimplemented!()
1427    }
1428    #[inline]
1429    pub fn compare_exchange(
1430        &self,
1431        current: bool,
1432        new: bool,
1433        success: Ordering,
1434        failure: Ordering,
1435    ) -> Result<bool, bool>
1436    where
1437        &'a Self: HasCompareExchange,
1438    {
1439        unimplemented!()
1440    }
1441    #[inline]
1442    pub fn compare_exchange_weak(
1443        &self,
1444        current: bool,
1445        new: bool,
1446        success: Ordering,
1447        failure: Ordering,
1448    ) -> Result<bool, bool>
1449    where
1450        &'a Self: HasCompareExchangeWeak,
1451    {
1452        unimplemented!()
1453    }
1454    #[inline]
1455    pub fn fetch_and(&self, val: bool, order: Ordering) -> bool
1456    where
1457        &'a Self: HasFetchAnd,
1458    {
1459        unimplemented!()
1460    }
1461    #[inline]
1462    pub fn and(&self, val: bool, order: Ordering)
1463    where
1464        &'a Self: HasAnd,
1465    {
1466        unimplemented!()
1467    }
1468    #[inline]
1469    pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool
1470    where
1471        &'a Self: HasFetchNand,
1472    {
1473        unimplemented!()
1474    }
1475    #[inline]
1476    pub fn fetch_or(&self, val: bool, order: Ordering) -> bool
1477    where
1478        &'a Self: HasFetchOr,
1479    {
1480        unimplemented!()
1481    }
1482    #[inline]
1483    pub fn or(&self, val: bool, order: Ordering)
1484    where
1485        &'a Self: HasOr,
1486    {
1487        unimplemented!()
1488    }
1489    #[inline]
1490    pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool
1491    where
1492        &'a Self: HasFetchXor,
1493    {
1494        unimplemented!()
1495    }
1496    #[inline]
1497    pub fn xor(&self, val: bool, order: Ordering)
1498    where
1499        &'a Self: HasXor,
1500    {
1501        unimplemented!()
1502    }
1503    #[inline]
1504    pub fn fetch_not(&self, order: Ordering) -> bool
1505    where
1506        &'a Self: HasFetchNot,
1507    {
1508        unimplemented!()
1509    }
1510    #[inline]
1511    pub fn not(&self, order: Ordering)
1512    where
1513        &'a Self: HasNot,
1514    {
1515        unimplemented!()
1516    }
1517    #[inline]
1518    pub fn fetch_update<F>(
1519        &self,
1520        set_order: Ordering,
1521        fetch_order: Ordering,
1522        f: F,
1523    ) -> Result<bool, bool>
1524    where
1525        F: FnMut(bool) -> Option<bool>,
1526        &'a Self: HasFetchUpdate,
1527    {
1528        unimplemented!()
1529    }
1530    } // cfg_no_atomic_cas_or_amo32!
1531}
1532} // cfg_no_atomic_cas!
1533} // cfg_has_atomic_8!
1534
1535cfg_has_atomic_ptr! {
1536/// A raw pointer type which can be safely shared between threads.
1537///
1538/// This type has the same in-memory representation as a `*mut T`.
1539///
1540/// If the compiler and the platform support atomic loads and stores of pointers,
1541/// this type is a wrapper for the standard library's
1542/// [`AtomicPtr`](core::sync::atomic::AtomicPtr). If the platform supports it
1543/// but the compiler does not, atomic operations are implemented using inline
1544/// assembly.
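///
/// # Examples
///
/// A small usage sketch:
///
/// ```
/// use portable_atomic::{AtomicPtr, Ordering};
///
/// let mut data = 10;
/// let atomic_ptr = AtomicPtr::new(&mut data);
///
/// let loaded = atomic_ptr.load(Ordering::Relaxed);
/// // SAFETY: `loaded` still points to the live `data` above.
/// assert_eq!(unsafe { *loaded }, 10);
/// ```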
1545// We could use #[repr(transparent)] here, but #[repr(C, align(N))]
1546// makes the alignment clearer in the generated docs.
1547#[cfg_attr(target_pointer_width = "16", repr(C, align(2)))]
1548#[cfg_attr(target_pointer_width = "32", repr(C, align(4)))]
1549#[cfg_attr(target_pointer_width = "64", repr(C, align(8)))]
1550#[cfg_attr(target_pointer_width = "128", repr(C, align(16)))]
1551pub struct AtomicPtr<T> {
1552    inner: imp::AtomicPtr<T>,
1553}
1554
1555impl<T> Default for AtomicPtr<T> {
1556    /// Creates a null `AtomicPtr<T>`.
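    ///
    /// # Examples
    ///
    /// A small sketch of the default (null) value:
    ///
    /// ```
    /// use portable_atomic::AtomicPtr;
    ///
    /// let atomic_ptr: AtomicPtr<u8> = AtomicPtr::default();
    /// assert!(atomic_ptr.into_inner().is_null());
    /// ```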
1557    #[inline]
1558    fn default() -> Self {
1559        Self::new(ptr::null_mut())
1560    }
1561}
1562
1563impl<T> From<*mut T> for AtomicPtr<T> {
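    /// Converts a `*mut T` into an `AtomicPtr<T>`.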
1564    #[inline]
1565    fn from(p: *mut T) -> Self {
1566        Self::new(p)
1567    }
1568}
1569
1570impl<T> fmt::Debug for AtomicPtr<T> {
1571    #[inline] // fmt is not hot path, but #[inline] on fmt seems to still be useful: https://github.com/rust-lang/rust/pull/117727
1572    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1573        // std atomic types use Relaxed in Debug::fmt: https://github.com/rust-lang/rust/blob/1.80.0/library/core/src/sync/atomic.rs#L2166
1574        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
1575    }
1576}
1577
1578impl<T> fmt::Pointer for AtomicPtr<T> {
1579    #[inline] // fmt is not hot path, but #[inline] on fmt seems to still be useful: https://github.com/rust-lang/rust/pull/117727
1580    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1581        // std atomic types use Relaxed in Debug::fmt: https://github.com/rust-lang/rust/blob/1.80.0/library/core/src/sync/atomic.rs#L2166
1582        fmt::Pointer::fmt(&self.load(Ordering::Relaxed), f)
1583    }
1584}
1585
1586// UnwindSafe is implicitly implemented.
1587#[cfg(not(portable_atomic_no_core_unwind_safe))]
1588impl<T> core::panic::RefUnwindSafe for AtomicPtr<T> {}
1589#[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
1590impl<T> std::panic::RefUnwindSafe for AtomicPtr<T> {}
1591
1592impl<T> AtomicPtr<T> {
1593    /// Creates a new `AtomicPtr`.
1594    ///
1595    /// # Examples
1596    ///
1597    /// ```
1598    /// use portable_atomic::AtomicPtr;
1599    ///
1600    /// let ptr = &mut 5;
1601    /// let atomic_ptr = AtomicPtr::new(ptr);
1602    /// ```
1603    #[inline]
1604    #[must_use]
1605    pub const fn new(p: *mut T) -> Self {
1606        static_assert_layout!(AtomicPtr<()>, *mut ());
1607        Self { inner: imp::AtomicPtr::new(p) }
1608    }
1609
1610    // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
1611    const_fn! {
1612        const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
1613        /// Creates a new `AtomicPtr` from a pointer.
1614        ///
1615        /// This is `const fn` on Rust 1.83+.
1616        ///
1617        /// # Safety
1618        ///
1619        /// * `ptr` must be aligned to `align_of::<AtomicPtr<T>>()` (note that on some platforms this
1620        ///   can be bigger than `align_of::<*mut T>()`).
1621        /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
1622        /// * If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
1623        ///   behind `ptr` must have a happens-before relationship with atomic accesses via the returned
1624        ///   value (or vice-versa).
1625        ///   * In other words, time periods where the value is accessed atomically may not overlap
1626        ///     with periods where the value is accessed non-atomically.
1627        ///   * This requirement is trivially satisfied if `ptr` is never used non-atomically for the
1628        ///     duration of lifetime `'a`. Most use cases should be able to follow this guideline.
1629        ///   * This requirement is also trivially satisfied if all accesses (atomic or not) are done
1630        ///     from the same thread.
1631        /// * If this atomic type is *not* lock-free:
1632        ///   * Any accesses to the value behind `ptr` must have a happens-before relationship
1633        ///     with accesses via the returned value (or vice-versa).
1634        ///   * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
1635        ///     be compatible with operations performed by this atomic type.
1636        /// * This method must not be used to create overlapping or mixed-size atomic accesses, as
1637        ///   these are not supported by the memory model.
1638        ///
1639        /// [valid]: core::ptr#safety
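        ///
        /// # Examples
        ///
        /// A minimal, single-threaded sketch, in which the happens-before
        /// requirements above are trivially satisfied:
        ///
        /// ```
        /// use portable_atomic::{AtomicPtr, Ordering};
        ///
        /// let mut data = 123;
        /// let mut ptr: *mut i32 = &mut data;
        /// // SAFETY: `ptr` is valid and suitably aligned for the duration of the
        /// // borrow, and it is only ever accessed via the returned atomic.
        /// let atomic = unsafe { AtomicPtr::<i32>::from_ptr(&mut ptr) };
        /// assert_eq!(unsafe { *atomic.load(Ordering::Relaxed) }, 123);
        /// ```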
1640        #[inline]
1641        #[must_use]
1642        pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a Self {
1643            #[allow(clippy::cast_ptr_alignment)]
1644            // SAFETY: guaranteed by the caller
1645            unsafe { &*(ptr as *mut Self) }
1646        }
1647    }
1648
1649    /// Returns `true` if operations on values of this type are lock-free.
1650    ///
1651    /// If the compiler or the platform doesn't support the necessary
1652    /// atomic instructions, global locks for every potentially
1653    /// concurrent atomic operation will be used.
1654    ///
1655    /// # Examples
1656    ///
1657    /// ```
1658    /// use portable_atomic::AtomicPtr;
1659    ///
1660    /// let is_lock_free = AtomicPtr::<()>::is_lock_free();
1661    /// ```
1662    #[inline]
1663    #[must_use]
1664    pub fn is_lock_free() -> bool {
1665        <imp::AtomicPtr<T>>::is_lock_free()
1666    }
1667
1668    /// Returns `true` if operations on values of this type are always lock-free.
1669    ///
1670    /// If the compiler or the platform doesn't support the necessary
1671    /// atomic instructions, global locks for every potentially
1672    /// concurrent atomic operation will be used.
1673    ///
1674    /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
1675    /// this type may be lock-free even if the function returns false.
1676    ///
1677    /// # Examples
1678    ///
1679    /// ```
1680    /// use portable_atomic::AtomicPtr;
1681    ///
1682    /// const IS_ALWAYS_LOCK_FREE: bool = AtomicPtr::<()>::is_always_lock_free();
1683    /// ```
1684    #[inline]
1685    #[must_use]
1686    pub const fn is_always_lock_free() -> bool {
1687        <imp::AtomicPtr<T>>::IS_ALWAYS_LOCK_FREE
1688    }
1689    #[cfg(test)]
1690    const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();
1691
1692    const_fn! {
1693        const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
1694        /// Returns a mutable reference to the underlying pointer.
1695        ///
1696        /// This is safe because the mutable reference guarantees that no other threads are
1697        /// concurrently accessing the atomic data.
1698        ///
1699        /// This is `const fn` on Rust 1.83+.
1700        ///
1701        /// # Examples
1702        ///
1703        /// ```
1704        /// use portable_atomic::{AtomicPtr, Ordering};
1705        ///
1706        /// let mut data = 10;
1707        /// let mut atomic_ptr = AtomicPtr::new(&mut data);
1708        /// let mut other_data = 5;
1709        /// *atomic_ptr.get_mut() = &mut other_data;
1710        /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
1711        /// ```
1712        #[inline]
1713        pub const fn get_mut(&mut self) -> &mut *mut T {
1714            // SAFETY: the mutable reference guarantees unique ownership.
1715            // (core::sync::atomic::Atomic*::get_mut is not const yet)
1716            unsafe { &mut *self.as_ptr() }
1717        }
1718    }
1719
1720    // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
1721    // https://github.com/rust-lang/rust/issues/76314
1722
1723    const_fn! {
1724        const_if: #[cfg(not(portable_atomic_no_const_transmute))];
1725        /// Consumes the atomic and returns the contained value.
1726        ///
1727        /// This is safe because passing `self` by value guarantees that no other threads are
1728        /// concurrently accessing the atomic data.
1729        ///
1730        /// This is `const fn` on Rust 1.56+.
1731        ///
1732        /// # Examples
1733        ///
1734        /// ```
1735        /// use portable_atomic::AtomicPtr;
1736        ///
1737        /// let mut data = 5;
1738        /// let atomic_ptr = AtomicPtr::new(&mut data);
1739        /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
1740        /// ```
1741        #[inline]
1742        pub const fn into_inner(self) -> *mut T {
1743            // SAFETY: AtomicPtr<T> and *mut T have the same size and in-memory representation,
1744            // so they can be safely transmuted.
1745            // (const UnsafeCell::into_inner is unstable)
1746            unsafe { core::mem::transmute(self) }
1747        }
1748    }
1749
1750    /// Loads a value from the pointer.
1751    ///
1752    /// `load` takes an [`Ordering`] argument which describes the memory ordering
1753    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
1754    ///
1755    /// # Panics
1756    ///
1757    /// Panics if `order` is [`Release`] or [`AcqRel`].
1758    ///
1759    /// # Examples
1760    ///
1761    /// ```
1762    /// use portable_atomic::{AtomicPtr, Ordering};
1763    ///
1764    /// let ptr = &mut 5;
1765    /// let some_ptr = AtomicPtr::new(ptr);
1766    ///
1767    /// let value = some_ptr.load(Ordering::Relaxed);
1768    /// ```
1769    #[inline]
1770    #[cfg_attr(
1771        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1772        track_caller
1773    )]
1774    pub fn load(&self, order: Ordering) -> *mut T {
1775        self.inner.load(order)
1776    }
1777
1778    /// Stores a value into the pointer.
1779    ///
1780    /// `store` takes an [`Ordering`] argument which describes the memory ordering
1781    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
1782    ///
1783    /// # Panics
1784    ///
1785    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
1786    ///
1787    /// # Examples
1788    ///
1789    /// ```
1790    /// use portable_atomic::{AtomicPtr, Ordering};
1791    ///
1792    /// let ptr = &mut 5;
1793    /// let some_ptr = AtomicPtr::new(ptr);
1794    ///
1795    /// let other_ptr = &mut 10;
1796    ///
1797    /// some_ptr.store(other_ptr, Ordering::Relaxed);
1798    /// ```
1799    #[inline]
1800    #[cfg_attr(
1801        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1802        track_caller
1803    )]
1804    pub fn store(&self, ptr: *mut T, order: Ordering) {
1805        self.inner.store(ptr, order);
1806    }
1807
1808    cfg_has_atomic_cas_or_amo32! {
1809    /// Stores a value into the pointer, returning the previous value.
1810    ///
1811    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
1812    /// of this operation. All ordering modes are possible. Note that using
1813    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1814    /// using [`Release`] makes the load part [`Relaxed`].
1815    ///
1816    /// # Examples
1817    ///
1818    /// ```
1819    /// use portable_atomic::{AtomicPtr, Ordering};
1820    ///
1821    /// let ptr = &mut 5;
1822    /// let some_ptr = AtomicPtr::new(ptr);
1823    ///
1824    /// let other_ptr = &mut 10;
1825    ///
1826    /// let value = some_ptr.swap(other_ptr, Ordering::Relaxed);
1827    /// ```
1828    #[inline]
1829    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1830    pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T {
1831        self.inner.swap(ptr, order)
1832    }
1833
1834    cfg_has_atomic_cas! {
1835    /// Stores a value into the pointer if the current value is the same as the `current` value.
1836    ///
1837    /// The return value is a result indicating whether the new value was written and containing
1838    /// the previous value. On success this value is guaranteed to be equal to `current`.
1839    ///
1840    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
1841    /// ordering of this operation. `success` describes the required ordering for the
1842    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1843    /// `failure` describes the required ordering for the load operation that takes place when
1844    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1845    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1846    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1847    ///
1848    /// # Panics
1849    ///
1850    /// Panics if `failure` is [`Release`] or [`AcqRel`].
1851    ///
1852    /// # Examples
1853    ///
1854    /// ```
1855    /// use portable_atomic::{AtomicPtr, Ordering};
1856    ///
1857    /// let ptr = &mut 5;
1858    /// let some_ptr = AtomicPtr::new(ptr);
1859    ///
1860    /// let other_ptr = &mut 10;
1861    ///
1862    /// let value = some_ptr.compare_exchange(ptr, other_ptr, Ordering::SeqCst, Ordering::Relaxed);
1863    /// ```
1864    #[inline]
1865    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
1866    #[cfg_attr(
1867        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1868        track_caller
1869    )]
1870    pub fn compare_exchange(
1871        &self,
1872        current: *mut T,
1873        new: *mut T,
1874        success: Ordering,
1875        failure: Ordering,
1876    ) -> Result<*mut T, *mut T> {
1877        self.inner.compare_exchange(current, new, success, failure)
1878    }
1879
1880    /// Stores a value into the pointer if the current value is the same as the `current` value.
1881    ///
1882    /// Unlike [`AtomicPtr::compare_exchange`], this function is allowed to spuriously fail even when the
1883    /// comparison succeeds, which can result in more efficient code on some platforms. The
1884    /// return value is a result indicating whether the new value was written and containing the
1885    /// previous value.
1886    ///
1887    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
1888    /// ordering of this operation. `success` describes the required ordering for the
1889    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1890    /// `failure` describes the required ordering for the load operation that takes place when
1891    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1892    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1893    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1894    ///
1895    /// # Panics
1896    ///
1897    /// Panics if `failure` is [`Release`] or [`AcqRel`].
1898    ///
1899    /// # Examples
1900    ///
1901    /// ```
1902    /// use portable_atomic::{AtomicPtr, Ordering};
1903    ///
1904    /// let some_ptr = AtomicPtr::new(&mut 5);
1905    ///
1906    /// let new = &mut 10;
1907    /// let mut old = some_ptr.load(Ordering::Relaxed);
1908    /// loop {
1909    ///     match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
1910    ///         Ok(_) => break,
1911    ///         Err(x) => old = x,
1912    ///     }
1913    /// }
1914    /// ```
1915    #[inline]
1916    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
1917    #[cfg_attr(
1918        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1919        track_caller
1920    )]
1921    pub fn compare_exchange_weak(
1922        &self,
1923        current: *mut T,
1924        new: *mut T,
1925        success: Ordering,
1926        failure: Ordering,
1927    ) -> Result<*mut T, *mut T> {
1928        self.inner.compare_exchange_weak(current, new, success, failure)
1929    }
1930
1931    /// Fetches the value, and applies a function to it that returns an optional
1932    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1933    /// returned `Some(_)`, else `Err(previous_value)`.
1934    ///
1935    /// Note: This may call the function multiple times if the value has been
1936    /// changed from other threads in the meantime, as long as the function
1937    /// returns `Some(_)`, but the function will have been applied only once to
1938    /// the stored value.
1939    ///
1940    /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1941    /// ordering of this operation. The first describes the required ordering for
1942    /// when the operation finally succeeds while the second describes the
1943    /// required ordering for loads. These correspond to the success and failure
1944    /// orderings of [`compare_exchange`](Self::compare_exchange) respectively.
1945    ///
1946    /// Using [`Acquire`] as success ordering makes the store part of this
1947    /// operation [`Relaxed`], and using [`Release`] makes the final successful
1948    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1949    /// [`Acquire`] or [`Relaxed`].
1950    ///
1951    /// # Panics
1952    ///
1953    /// Panics if `fetch_order` is [`Release`] or [`AcqRel`].
1954    ///
1955    /// # Considerations
1956    ///
1957    /// This method is not magic; it is not provided by the hardware.
1958    /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
1959    /// and suffers from the same drawbacks.
1960    /// In particular, this method will not circumvent the [ABA Problem].
1961    ///
1962    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1963    ///
1964    /// # Examples
1965    ///
1966    /// ```
1967    /// use portable_atomic::{AtomicPtr, Ordering};
1968    ///
1969    /// let ptr: *mut _ = &mut 5;
1970    /// let some_ptr = AtomicPtr::new(ptr);
1971    ///
1972    /// let new: *mut _ = &mut 10;
1973    /// assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
1974    /// let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
1975    ///     if x == ptr {
1976    ///         Some(new)
1977    ///     } else {
1978    ///         None
1979    ///     }
1980    /// });
1981    /// assert_eq!(result, Ok(ptr));
1982    /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
1983    /// ```
1984    #[inline]
1985    #[cfg_attr(
1986        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1987        track_caller
1988    )]
1989    pub fn fetch_update<F>(
1990        &self,
1991        set_order: Ordering,
1992        fetch_order: Ordering,
1993        mut f: F,
1994    ) -> Result<*mut T, *mut T>
1995    where
1996        F: FnMut(*mut T) -> Option<*mut T>,
1997    {
1998        let mut prev = self.load(fetch_order);
1999        while let Some(next) = f(prev) {
2000            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
2001                x @ Ok(_) => return x,
2002                Err(next_prev) => prev = next_prev,
2003            }
2004        }
2005        Err(prev)
2006    }
2007
2008    #[cfg(miri)]
2009    #[inline]
2010    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2011    fn fetch_update_<F>(&self, order: Ordering, mut f: F) -> *mut T
2012    where
2013        F: FnMut(*mut T) -> *mut T,
2014    {
2015        // This is a private function and all instances of `f` only operate on the value
2016        // loaded, so there is no need to synchronize the first load/failed CAS.
2017        let mut prev = self.load(Ordering::Relaxed);
2018        loop {
2019            let next = f(prev);
2020            match self.compare_exchange_weak(prev, next, order, Ordering::Relaxed) {
2021                Ok(x) => return x,
2022                Err(next_prev) => prev = next_prev,
2023            }
2024        }
2025    }
2026    } // cfg_has_atomic_cas!
2027
2028    /// Offsets the pointer's address by adding `val` (in units of `T`),
2029    /// returning the previous pointer.
2030    ///
2031    /// This is equivalent to using [`wrapping_add`] to atomically perform
2032    /// `ptr = ptr.wrapping_add(val);`.
2033    ///
2034    /// This method operates in units of `T`, which means that it cannot be used
2035    /// to offset the pointer by an amount which is not a multiple of
2036    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2037    /// work with a deliberately misaligned pointer. In such cases, you may use
2038    /// the [`fetch_byte_add`](Self::fetch_byte_add) method instead.
2039    ///
2040    /// `fetch_ptr_add` takes an [`Ordering`] argument which describes the
2041    /// memory ordering of this operation. All ordering modes are possible. Note
2042    /// that using [`Acquire`] makes the store part of this operation
2043    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2044    ///
2045    /// [`wrapping_add`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_add
2046    ///
2047    /// # Examples
2048    ///
2049    /// ```
2050    /// # #![allow(unstable_name_collisions)]
2051    /// # #[allow(unused_imports)] use sptr::Strict; // strict provenance polyfill for old rustc
2052    /// use portable_atomic::{AtomicPtr, Ordering};
2053    ///
2054    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2055    /// assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0);
2056    /// // Note: units of `size_of::<i64>()`.
2057    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 8);
2058    /// ```
2059    #[inline]
2060    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2061    pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T {
2062        self.fetch_byte_add(val.wrapping_mul(core::mem::size_of::<T>()), order)
2063    }
2064
2065    /// Offsets the pointer's address by subtracting `val` (in units of `T`),
2066    /// returning the previous pointer.
2067    ///
2068    /// This is equivalent to using [`wrapping_sub`] to atomically perform
2069    /// `ptr = ptr.wrapping_sub(val);`.
2070    ///
2071    /// This method operates in units of `T`, which means that it cannot be used
2072    /// to offset the pointer by an amount which is not a multiple of
2073    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2074    /// work with a deliberately misaligned pointer. In such cases, you may use
2075    /// the [`fetch_byte_sub`](Self::fetch_byte_sub) method instead.
2076    ///
2077    /// `fetch_ptr_sub` takes an [`Ordering`] argument which describes the memory
2078    /// ordering of this operation. All ordering modes are possible. Note that
2079    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2080    /// and using [`Release`] makes the load part [`Relaxed`].
2081    ///
2082    /// [`wrapping_sub`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_sub
2083    ///
2084    /// # Examples
2085    ///
2086    /// ```
2087    /// use portable_atomic::{AtomicPtr, Ordering};
2088    ///
2089    /// let array = [1i32, 2i32];
2090    /// let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _);
2091    ///
2092    /// assert!(core::ptr::eq(atom.fetch_ptr_sub(1, Ordering::Relaxed), &array[1]));
2093    /// assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0]));
2094    /// ```
2095    #[inline]
2096    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2097    pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T {
2098        self.fetch_byte_sub(val.wrapping_mul(core::mem::size_of::<T>()), order)
2099    }
2100
2101    /// Offsets the pointer's address by adding `val` *bytes*, returning the
2102    /// previous pointer.
2103    ///
2104    /// This is equivalent to using [`wrapping_add`] and [`cast`] to atomically
2105    /// perform `ptr = ptr.cast::<u8>().wrapping_add(val).cast::<T>()`.
2106    ///
2107    /// `fetch_byte_add` takes an [`Ordering`] argument which describes the
2108    /// memory ordering of this operation. All ordering modes are possible. Note
2109    /// that using [`Acquire`] makes the store part of this operation
2110    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2111    ///
2112    /// [`wrapping_add`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_add
2113    /// [`cast`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.cast
2114    ///
2115    /// # Examples
2116    ///
2117    /// ```
2118    /// # #![allow(unstable_name_collisions)]
2119    /// # #[allow(unused_imports)] use sptr::Strict; // strict provenance polyfill for old rustc
2120    /// use portable_atomic::{AtomicPtr, Ordering};
2121    ///
2122    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2123    /// assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0);
2124    /// // Note: in units of bytes, not `size_of::<i64>()`.
2125    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 1);
2126    /// ```
2127    #[inline]
2128    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2129    pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T {
2130        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2131        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2132        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2133        // compatible and is sound.
2134        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2135        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2136        #[cfg(miri)]
2137        {
2138            self.fetch_update_(order, |x| strict::map_addr(x, |x| x.wrapping_add(val)))
2139        }
2140        #[cfg(not(miri))]
2141        {
2142            self.as_atomic_usize().fetch_add(val, order) as *mut T
2143        }
2144    }
2145
2146    /// Offsets the pointer's address by subtracting `val` *bytes*, returning the
2147    /// previous pointer.
2148    ///
2149    /// This is equivalent to using [`wrapping_sub`] and [`cast`] to atomically
2150    /// perform `ptr = ptr.cast::<u8>().wrapping_sub(val).cast::<T>()`.
2151    ///
2152    /// `fetch_byte_sub` takes an [`Ordering`] argument which describes the
2153    /// memory ordering of this operation. All ordering modes are possible. Note
2154    /// that using [`Acquire`] makes the store part of this operation
2155    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2156    ///
2157    /// [`wrapping_sub`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_sub
2158    /// [`cast`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.cast
2159    ///
2160    /// # Examples
2161    ///
2162    /// ```
2163    /// # #![allow(unstable_name_collisions)]
2164    /// # #[allow(unused_imports)] use sptr::Strict; // strict provenance polyfill for old rustc
2165    /// use portable_atomic::{AtomicPtr, Ordering};
2166    ///
2167    /// let atom = AtomicPtr::<i64>::new(sptr::invalid_mut(1));
2168    /// assert_eq!(atom.fetch_byte_sub(1, Ordering::Relaxed).addr(), 1);
2169    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 0);
2170    /// ```
2171    #[inline]
2172    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2173    pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T {
2174        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2175        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2176        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2177        // compatible and is sound.
2178        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2179        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2180        #[cfg(miri)]
2181        {
2182            self.fetch_update_(order, |x| strict::map_addr(x, |x| x.wrapping_sub(val)))
2183        }
2184        #[cfg(not(miri))]
2185        {
2186            self.as_atomic_usize().fetch_sub(val, order) as *mut T
2187        }
2188    }
2189
2190    /// Performs a bitwise "or" operation on the address of the current pointer,
2191    /// and the argument `val`, and stores a pointer with provenance of the
2192    /// current pointer and the resulting address.
2193    ///
2194    /// This is equivalent to using [`map_addr`] to atomically perform
2195    /// `ptr = ptr.map_addr(|a| a | val)`. This can be used in tagged
2196    /// pointer schemes to atomically set tag bits.
2197    ///
2198    /// **Caveat**: This operation returns the previous value. To compute the
2199    /// stored value without losing provenance, you may use [`map_addr`]. For
2200    /// example: `a.fetch_or(val).map_addr(|a| a | val)`.
2201    ///
2202    /// `fetch_or` takes an [`Ordering`] argument which describes the memory
2203    /// ordering of this operation. All ordering modes are possible. Note that
2204    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2205    /// and using [`Release`] makes the load part [`Relaxed`].
2206    ///
2207    /// This API and its claimed semantics are part of the Strict Provenance
2208    /// experiment; see the [module documentation for `ptr`][core::ptr] for
2209    /// details.
2210    ///
2211    /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2212    ///
2213    /// # Examples
2214    ///
2215    /// ```
2216    /// # #![allow(unstable_name_collisions)]
2217    /// # #[allow(unused_imports)] use sptr::Strict; // strict provenance polyfill for old rustc
2218    /// use portable_atomic::{AtomicPtr, Ordering};
2219    ///
2220    /// let pointer = &mut 3i64 as *mut i64;
2221    ///
2222    /// let atom = AtomicPtr::<i64>::new(pointer);
2223    /// // Tag the bottom bit of the pointer.
2224    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
2225    /// // Extract and untag.
2226    /// let tagged = atom.load(Ordering::Relaxed);
2227    /// assert_eq!(tagged.addr() & 1, 1);
2228    /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2229    /// ```
2230    #[inline]
2231    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2232    pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T {
2233        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2234        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2235        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2236        // compatible and is sound.
2237        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2238        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2239        #[cfg(miri)]
2240        {
2241            self.fetch_update_(order, |x| strict::map_addr(x, |x| x | val))
2242        }
2243        #[cfg(not(miri))]
2244        {
2245            self.as_atomic_usize().fetch_or(val, order) as *mut T
2246        }
2247    }
2248
2249    /// Performs a bitwise "and" operation on the address of the current
2250    /// pointer, and the argument `val`, and stores a pointer with provenance of
2251    /// the current pointer and the resulting address.
2252    ///
2253    /// This is equivalent to using [`map_addr`] to atomically perform
2254    /// `ptr = ptr.map_addr(|a| a & val)`. This can be used in tagged
2255    /// pointer schemes to atomically unset tag bits.
2256    ///
2257    /// **Caveat**: This operation returns the previous value. To compute the
2258    /// stored value without losing provenance, you may use [`map_addr`]. For
2259    /// example: `a.fetch_and(val).map_addr(|a| a & val)`.
2260    ///
2261    /// `fetch_and` takes an [`Ordering`] argument which describes the memory
2262    /// ordering of this operation. All ordering modes are possible. Note that
2263    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2264    /// and using [`Release`] makes the load part [`Relaxed`].
2265    ///
2266    /// This API and its claimed semantics are part of the Strict Provenance
2267    /// experiment; see the [module documentation for `ptr`][core::ptr] for
2268    /// details.
2269    ///
2270    /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2271    ///
2272    /// # Examples
2273    ///
2274    /// ```
2275    /// # #![allow(unstable_name_collisions)]
2276    /// # #[allow(unused_imports)] use sptr::Strict; // strict provenance polyfill for old rustc
2277    /// use portable_atomic::{AtomicPtr, Ordering};
2278    ///
2279    /// let pointer = &mut 3i64 as *mut i64;
2280    /// // A tagged pointer
2281    /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2282    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
2283    /// // Untag, and extract the previously tagged pointer.
2284    /// let untagged = atom.fetch_and(!1, Ordering::Relaxed).map_addr(|a| a & !1);
2285    /// assert_eq!(untagged, pointer);
2286    /// ```
2287    #[inline]
2288    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2289    pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T {
2290        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2291        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2292        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2293        // compatible and is sound.
2294        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2295        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2296        #[cfg(miri)]
2297        {
2298            self.fetch_update_(order, |x| strict::map_addr(x, |x| x & val))
2299        }
2300        #[cfg(not(miri))]
2301        {
2302            self.as_atomic_usize().fetch_and(val, order) as *mut T
2303        }
2304    }
2305
2306    /// Performs a bitwise "xor" operation on the address of the current
2307    /// pointer, and the argument `val`, and stores a pointer with provenance of
2308    /// the current pointer and the resulting address.
2309    ///
2310    /// This is equivalent to using [`map_addr`] to atomically perform
2311    /// `ptr = ptr.map_addr(|a| a ^ val)`. This can be used in tagged
2312    /// pointer schemes to atomically toggle tag bits.
2313    ///
2314    /// **Caveat**: This operation returns the previous value. To compute the
2315    /// stored value without losing provenance, you may use [`map_addr`]. For
2316    /// example: `a.fetch_xor(val).map_addr(|a| a ^ val)`.
2317    ///
2318    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory
2319    /// ordering of this operation. All ordering modes are possible. Note that
2320    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2321    /// and using [`Release`] makes the load part [`Relaxed`].
2322    ///
2323    /// This API and its claimed semantics are part of the Strict Provenance
2324    /// experiment; see the [module documentation for `ptr`][core::ptr] for
2325    /// details.
2326    ///
2327    /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2328    ///
2329    /// # Examples
2330    ///
2331    /// ```
2332    /// # #![allow(unstable_name_collisions)]
2333    /// # #[allow(unused_imports)] use sptr::Strict; // strict provenance polyfill for old rustc
2334    /// use portable_atomic::{AtomicPtr, Ordering};
2335    ///
2336    /// let pointer = &mut 3i64 as *mut i64;
2337    /// let atom = AtomicPtr::<i64>::new(pointer);
2338    ///
2339    /// // Toggle a tag bit on the pointer.
2340    /// atom.fetch_xor(1, Ordering::Relaxed);
2341    /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2342    /// ```
2343    #[inline]
2344    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2345    pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T {
2346        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2347        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2348        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2349        // compatible and is sound.
2350        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2351        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2352        #[cfg(miri)]
2353        {
2354            self.fetch_update_(order, |x| strict::map_addr(x, |x| x ^ val))
2355        }
2356        #[cfg(not(miri))]
2357        {
2358            self.as_atomic_usize().fetch_xor(val, order) as *mut T
2359        }
2360    }
2361
2362    /// Sets the bit at the specified bit-position to 1.
2363    ///
2364    /// Returns `true` if the specified bit was previously set to 1.
2365    ///
2366    /// `bit_set` takes an [`Ordering`] argument which describes the memory ordering
2367    /// of this operation. All ordering modes are possible. Note that using
2368    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2369    /// using [`Release`] makes the load part [`Relaxed`].
2370    ///
2371    /// This corresponds to x86's `lock bts`, and the implementation uses it on x86/x86_64.
2372    ///
2373    /// # Examples
2374    ///
2375    /// ```
2376    /// # #![allow(unstable_name_collisions)]
2377    /// # #[allow(unused_imports)] use sptr::Strict; // strict provenance polyfill for old rustc
2378    /// use portable_atomic::{AtomicPtr, Ordering};
2379    ///
2380    /// let pointer = &mut 3i64 as *mut i64;
2381    ///
2382    /// let atom = AtomicPtr::<i64>::new(pointer);
2383    /// // Tag the bottom bit of the pointer.
2384    /// assert!(!atom.bit_set(0, Ordering::Relaxed));
2385    /// // Extract and untag.
2386    /// let tagged = atom.load(Ordering::Relaxed);
2387    /// assert_eq!(tagged.addr() & 1, 1);
2388    /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2389    /// ```
2390    #[inline]
2391    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2392    pub fn bit_set(&self, bit: u32, order: Ordering) -> bool {
2393        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2394        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2395        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2396        // compatible and is sound.
2397        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2398        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2399        #[cfg(miri)]
2400        {
2401            let mask = 1_usize.wrapping_shl(bit);
2402            self.fetch_or(mask, order) as usize & mask != 0
2403        }
2404        #[cfg(not(miri))]
2405        {
2406            self.as_atomic_usize().bit_set(bit, order)
2407        }
2408    }
2409
2410    /// Clears the bit at the specified bit-position to 0.
2411    ///
2412    /// Returns `true` if the specified bit was previously set to 1.
2413    ///
2414    /// `bit_clear` takes an [`Ordering`] argument which describes the memory ordering
2415    /// of this operation. All ordering modes are possible. Note that using
2416    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2417    /// using [`Release`] makes the load part [`Relaxed`].
2418    ///
2419    /// This corresponds to x86's `lock btr`, and the implementation uses it on x86/x86_64.
2420    ///
2421    /// # Examples
2422    ///
2423    /// ```
2424    /// # #![allow(unstable_name_collisions)]
2425    /// # #[allow(unused_imports)] use sptr::Strict; // strict provenance polyfill for old rustc
2426    /// use portable_atomic::{AtomicPtr, Ordering};
2427    ///
2428    /// let pointer = &mut 3i64 as *mut i64;
2429    /// // A tagged pointer
2430    /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2431    /// assert!(atom.bit_set(0, Ordering::Relaxed));
2432    /// // Untag
2433    /// assert!(atom.bit_clear(0, Ordering::Relaxed));
2434    /// ```
2435    #[inline]
2436    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2437    pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool {
2438        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2439        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2440        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2441        // compatible and is sound.
2442        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2443        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2444        #[cfg(miri)]
2445        {
2446            let mask = 1_usize.wrapping_shl(bit);
2447            self.fetch_and(!mask, order) as usize & mask != 0
2448        }
2449        #[cfg(not(miri))]
2450        {
2451            self.as_atomic_usize().bit_clear(bit, order)
2452        }
2453    }
2454
2455    /// Toggles the bit at the specified bit-position.
2456    ///
2457    /// Returns `true` if the specified bit was previously set to 1.
2458    ///
2459    /// `bit_toggle` takes an [`Ordering`] argument which describes the memory ordering
2460    /// of this operation. All ordering modes are possible. Note that using
2461    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2462    /// using [`Release`] makes the load part [`Relaxed`].
2463    ///
2464    /// This corresponds to x86's `lock btc`, and the implementation uses it on x86/x86_64.
2465    ///
2466    /// # Examples
2467    ///
2468    /// ```
2469    /// # #![allow(unstable_name_collisions)]
2470    /// # #[allow(unused_imports)] use sptr::Strict; // strict provenance polyfill for old rustc
2471    /// use portable_atomic::{AtomicPtr, Ordering};
2472    ///
2473    /// let pointer = &mut 3i64 as *mut i64;
2474    /// let atom = AtomicPtr::<i64>::new(pointer);
2475    ///
2476    /// // Toggle a tag bit on the pointer.
2477    /// atom.bit_toggle(0, Ordering::Relaxed);
2478    /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2479    /// ```
2480    #[inline]
2481    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2482    pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool {
2483        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2484        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2485        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2486        // compatible and is sound.
2487        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2488        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2489        #[cfg(miri)]
2490        {
2491            let mask = 1_usize.wrapping_shl(bit);
2492            self.fetch_xor(mask, order) as usize & mask != 0
2493        }
2494        #[cfg(not(miri))]
2495        {
2496            self.as_atomic_usize().bit_toggle(bit, order)
2497        }
2498    }
2499
2500    #[cfg(not(miri))]
2501    #[inline(always)]
2502    fn as_atomic_usize(&self) -> &AtomicUsize {
2503        static_assert!(
2504            core::mem::size_of::<AtomicPtr<()>>() == core::mem::size_of::<AtomicUsize>()
2505        );
2506        static_assert!(
2507            core::mem::align_of::<AtomicPtr<()>>() == core::mem::align_of::<AtomicUsize>()
2508        );
2509        // SAFETY: AtomicPtr and AtomicUsize have the same layout,
2510        // and both access data in the same way.
2511        unsafe { &*(self as *const Self as *const AtomicUsize) }
2512    }
2513    } // cfg_has_atomic_cas_or_amo32!
2514
2515    const_fn! {
2516        const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
2517        /// Returns a mutable pointer to the underlying pointer.
2518        ///
2519        /// Returning an `*mut` pointer from a shared reference to this atomic is
2520        /// safe because the atomic types work with interior mutability. Any use of
2521        /// the returned raw pointer requires an `unsafe` block and has to uphold
2522        /// the safety requirements. If there is concurrent access, note the following
2523        /// additional safety requirements:
2524        ///
2525        /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
2526        ///   operations on it must be atomic.
2527        /// - Otherwise, any concurrent operations on it must be compatible with
2528        ///   operations performed by this atomic type.
2529        ///
2530        /// This is `const fn` on Rust 1.58+.
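        ///
        /// # Examples
        ///
        /// A minimal single-threaded sketch; with no concurrent access,
        /// reading through the returned pointer is sound:
        ///
        /// ```
        /// use portable_atomic::AtomicPtr;
        ///
        /// let mut data = 10;
        /// let atom = AtomicPtr::new(&mut data as *mut i32);
        /// // No other thread accesses `atom` here, so this read is sound.
        /// unsafe { assert_eq!(**atom.as_ptr(), 10) };
        /// ```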
2531        #[inline]
2532        pub const fn as_ptr(&self) -> *mut *mut T {
2533            self.inner.as_ptr()
2534        }
2535    }
2536}
2537// See https://github.com/taiki-e/portable-atomic/issues/180
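// The `Has*` marker bounds on the stubs below are presumably never satisfied,
// so these hidden impls exist only to turn calls on targets without atomic CAS
// into a trait-bound error pointing here, rather than a bare "method not found".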
2538#[cfg(not(feature = "require-cas"))]
2539cfg_no_atomic_cas! {
2540#[doc(hidden)]
2541#[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
2542impl<'a, T: 'a> AtomicPtr<T> {
2543    cfg_no_atomic_cas_or_amo32! {
2544    #[inline]
2545    pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T
2546    where
2547        &'a Self: HasSwap,
2548    {
2549        unimplemented!()
2550    }
2551    } // cfg_no_atomic_cas_or_amo32!
2552    #[inline]
2553    pub fn compare_exchange(
2554        &self,
2555        current: *mut T,
2556        new: *mut T,
2557        success: Ordering,
2558        failure: Ordering,
2559    ) -> Result<*mut T, *mut T>
2560    where
2561        &'a Self: HasCompareExchange,
2562    {
2563        unimplemented!()
2564    }
2565    #[inline]
2566    pub fn compare_exchange_weak(
2567        &self,
2568        current: *mut T,
2569        new: *mut T,
2570        success: Ordering,
2571        failure: Ordering,
2572    ) -> Result<*mut T, *mut T>
2573    where
2574        &'a Self: HasCompareExchangeWeak,
2575    {
2576        unimplemented!()
2577    }
2578    #[inline]
2579    pub fn fetch_update<F>(
2580        &self,
2581        set_order: Ordering,
2582        fetch_order: Ordering,
2583        f: F,
2584    ) -> Result<*mut T, *mut T>
2585    where
2586        F: FnMut(*mut T) -> Option<*mut T>,
2587        &'a Self: HasFetchUpdate,
2588    {
2589        unimplemented!()
2590    }
2591    cfg_no_atomic_cas_or_amo32! {
2592    #[inline]
2593    pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T
2594    where
2595        &'a Self: HasFetchPtrAdd,
2596    {
2597        unimplemented!()
2598    }
2599    #[inline]
2600    pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T
2601    where
2602        &'a Self: HasFetchPtrSub,
2603    {
2604        unimplemented!()
2605    }
2606    #[inline]
2607    pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T
2608    where
2609        &'a Self: HasFetchByteAdd,
2610    {
2611        unimplemented!()
2612    }
2613    #[inline]
2614    pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T
2615    where
2616        &'a Self: HasFetchByteSub,
2617    {
2618        unimplemented!()
2619    }
2620    #[inline]
2621    pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T
2622    where
2623        &'a Self: HasFetchOr,
2624    {
2625        unimplemented!()
2626    }
2627    #[inline]
2628    pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T
2629    where
2630        &'a Self: HasFetchAnd,
2631    {
2632        unimplemented!()
2633    }
2634    #[inline]
2635    pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T
2636    where
2637        &'a Self: HasFetchXor,
2638    {
2639        unimplemented!()
2640    }
2641    #[inline]
2642    pub fn bit_set(&self, bit: u32, order: Ordering) -> bool
2643    where
2644        &'a Self: HasBitSet,
2645    {
2646        unimplemented!()
2647    }
2648    #[inline]
2649    pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool
2650    where
2651        &'a Self: HasBitClear,
2652    {
2653        unimplemented!()
2654    }
2655    #[inline]
2656    pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool
2657    where
2658        &'a Self: HasBitToggle,
2659    {
2660        unimplemented!()
2661    }
2662    } // cfg_no_atomic_cas_or_amo32!
2663}
2664} // cfg_no_atomic_cas!
2665} // cfg_has_atomic_ptr!
2666
2667macro_rules! atomic_int {
2668    // Atomic{I,U}* impls
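    // Illustrative invocation (hypothetical argument names; the real
    // invocations appear elsewhere in this file):
    // atomic_int!(AtomicU32, u32, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);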
2669    ($atomic_type:ident, $int_type:ident, $align:literal,
2670        $cfg_has_atomic_cas_or_amo32_or_8:ident, $cfg_no_atomic_cas_or_amo32_or_8:ident
2671        $(, #[$cfg_float:meta] $atomic_float_type:ident, $float_type:ident)?
2672    ) => {
2673        doc_comment! {
2674            concat!("An integer type which can be safely shared between threads.
2675
2676This type has the same in-memory representation as the underlying integer type,
2677[`", stringify!($int_type), "`].
2678
2679If the compiler and the platform support atomic loads and stores of [`", stringify!($int_type),
2680"`], this type is a wrapper for the standard library's `", stringify!($atomic_type),
2681"`. If the platform supports it but the compiler does not, atomic operations are implemented using
2682inline assembly. Otherwise, it synchronizes using global locks.
2683You can call [`", stringify!($atomic_type), "::is_lock_free()`] to check whether
2684atomic instructions or locks will be used.
2685"
2686            ),
2687            // We can use #[repr(transparent)] here, but #[repr(C, align(N))]
2688            // will show clearer docs.
2689            #[repr(C, align($align))]
2690            pub struct $atomic_type {
2691                inner: imp::$atomic_type,
2692            }
2693        }
2694
2695        impl Default for $atomic_type {
2696            #[inline]
2697            fn default() -> Self {
2698                Self::new($int_type::default())
2699            }
2700        }
2701
2702        impl From<$int_type> for $atomic_type {
2703            #[inline]
2704            fn from(v: $int_type) -> Self {
2705                Self::new(v)
2706            }
2707        }
2708
2709        // UnwindSafe is implicitly implemented.
2710        #[cfg(not(portable_atomic_no_core_unwind_safe))]
2711        impl core::panic::RefUnwindSafe for $atomic_type {}
2712        #[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
2713        impl std::panic::RefUnwindSafe for $atomic_type {}
2714
2715        impl_debug_and_serde!($atomic_type);
2716
2717        impl $atomic_type {
2718            doc_comment! {
2719                concat!(
2720                    "Creates a new atomic integer.
2721
2722# Examples
2723
2724```
2725use portable_atomic::", stringify!($atomic_type), ";
2726
2727let atomic_forty_two = ", stringify!($atomic_type), "::new(42);
2728```"
2729                ),
2730                #[inline]
2731                #[must_use]
2732                pub const fn new(v: $int_type) -> Self {
2733                    static_assert_layout!($atomic_type, $int_type);
2734                    Self { inner: imp::$atomic_type::new(v) }
2735                }
2736            }
2737
2738            // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
2739            #[cfg(not(portable_atomic_no_const_mut_refs))]
2740            doc_comment! {
2741                concat!("Creates a new reference to an atomic integer from a pointer.
2742
2743This is `const fn` on Rust 1.83+.
2744
2745# Safety
2746
2747* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
2748  can be bigger than `align_of::<", stringify!($int_type), ">()`).
2749* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2750* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
2751  behind `ptr` must have a happens-before relationship with atomic accesses via
2752  the returned value (or vice-versa).
2753  * In other words, time periods where the value is accessed atomically may not
2754    overlap with periods where the value is accessed non-atomically.
2755  * This requirement is trivially satisfied if `ptr` is never used non-atomically
2756    for the duration of lifetime `'a`. Most use cases should be able to follow
2757    this guideline.
2758  * This requirement is also trivially satisfied if all accesses (atomic or not) are
2759    done from the same thread.
2760* If this atomic type is *not* lock-free:
2761  * Any accesses to the value behind `ptr` must have a happens-before relationship
2762    with accesses via the returned value (or vice-versa).
2763  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
2764    be compatible with operations performed by this atomic type.
2765* This method must not be used to create overlapping or mixed-size atomic
2766  accesses, as these are not supported by the memory model.
2767
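# Examples

A minimal sketch; deriving the pointer from an existing `", stringify!($atomic_type), "`
guarantees that the (potentially greater) alignment requirement is met:

```
use portable_atomic::{", stringify!($atomic_type), ", Ordering};

let v = ", stringify!($atomic_type), "::new(5);
// SAFETY: the pointer is valid and properly aligned for the whole lifetime,
// and the value is only ever accessed atomically.
let a = unsafe { ", stringify!($atomic_type), "::from_ptr(v.as_ptr()) };
assert_eq!(a.load(Ordering::Relaxed), 5);
```
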
2768[valid]: core::ptr#safety"),
2769                #[inline]
2770                #[must_use]
2771                pub const unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a Self {
2772                    #[allow(clippy::cast_ptr_alignment)]
2773                    // SAFETY: guaranteed by the caller
2774                    unsafe { &*(ptr as *mut Self) }
2775                }
2776            }
2777            #[cfg(portable_atomic_no_const_mut_refs)]
2778            doc_comment! {
2779                concat!("Creates a new reference to an atomic integer from a pointer.
2780
2781This is `const fn` on Rust 1.83+.
2782
2783# Safety
2784
2785* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
2786  can be bigger than `align_of::<", stringify!($int_type), ">()`).
2787* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2788* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
2789  behind `ptr` must have a happens-before relationship with atomic accesses via
2790  the returned value (or vice-versa).
2791  * In other words, time periods where the value is accessed atomically may not
2792    overlap with periods where the value is accessed non-atomically.
2793  * This requirement is trivially satisfied if `ptr` is never used non-atomically
2794    for the duration of lifetime `'a`. Most use cases should be able to follow
2795    this guideline.
2796  * This requirement is also trivially satisfied if all accesses (atomic or not) are
2797    done from the same thread.
2798* If this atomic type is *not* lock-free:
2799  * Any accesses to the value behind `ptr` must have a happens-before relationship
2800    with accesses via the returned value (or vice-versa).
2801  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
2802    be compatible with operations performed by this atomic type.
2803* This method must not be used to create overlapping or mixed-size atomic
2804  accesses, as these are not supported by the memory model.
2805
2806[valid]: core::ptr#safety"),
2807                #[inline]
2808                #[must_use]
2809                pub unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a Self {
2810                    #[allow(clippy::cast_ptr_alignment)]
2811                    // SAFETY: guaranteed by the caller
2812                    unsafe { &*(ptr as *mut Self) }
2813                }
2814            }
2815
2816            doc_comment! {
2817                concat!("Returns `true` if operations on values of this type are lock-free.
2818
2819If the compiler or the platform doesn't support the necessary
2820atomic instructions, global locks for every potentially
2821concurrent atomic operation will be used.
2822
2823# Examples
2824
2825```
2826use portable_atomic::", stringify!($atomic_type), ";
2827
2828let is_lock_free = ", stringify!($atomic_type), "::is_lock_free();
2829```"),
2830                #[inline]
2831                #[must_use]
2832                pub fn is_lock_free() -> bool {
2833                    <imp::$atomic_type>::is_lock_free()
2834                }
2835            }
2836
2837            doc_comment! {
2838                concat!("Returns `true` if operations on values of this type are lock-free.
2839
2840If the compiler or the platform doesn't support the necessary
2841atomic instructions, global locks for every potentially
2842concurrent atomic operation will be used.
2843
2844**Note:** If the atomic operation relies on dynamic CPU feature detection,
2845this type may be lock-free even if the function returns false.
2846
2847# Examples
2848
2849```
2850use portable_atomic::", stringify!($atomic_type), ";
2851
2852const IS_ALWAYS_LOCK_FREE: bool = ", stringify!($atomic_type), "::is_always_lock_free();
2853```"),
2854                #[inline]
2855                #[must_use]
2856                pub const fn is_always_lock_free() -> bool {
2857                    <imp::$atomic_type>::IS_ALWAYS_LOCK_FREE
2858                }
2859            }
2860            #[cfg(test)]
2861            const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();
2862
2863            #[cfg(not(portable_atomic_no_const_mut_refs))]
2864            doc_comment! {
2865                concat!("Returns a mutable reference to the underlying integer.\n
2866This is safe because the mutable reference guarantees that no other threads are
2867concurrently accessing the atomic data.
2868
2869This is `const fn` on Rust 1.83+.
2870
2871# Examples
2872
2873```
2874use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2875
2876let mut some_var = ", stringify!($atomic_type), "::new(10);
2877assert_eq!(*some_var.get_mut(), 10);
2878*some_var.get_mut() = 5;
2879assert_eq!(some_var.load(Ordering::SeqCst), 5);
2880```"),
2881                #[inline]
2882                pub const fn get_mut(&mut self) -> &mut $int_type {
2883                    // SAFETY: the mutable reference guarantees unique ownership.
2884                    // (core::sync::atomic::Atomic*::get_mut is not const yet)
2885                    unsafe { &mut *self.as_ptr() }
2886                }
2887            }
2888            #[cfg(portable_atomic_no_const_mut_refs)]
2889            doc_comment! {
2890                concat!("Returns a mutable reference to the underlying integer.\n
2891This is safe because the mutable reference guarantees that no other threads are
2892concurrently accessing the atomic data.
2893
2894This is `const fn` on Rust 1.83+.
2895
2896# Examples
2897
2898```
2899use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2900
2901let mut some_var = ", stringify!($atomic_type), "::new(10);
2902assert_eq!(*some_var.get_mut(), 10);
2903*some_var.get_mut() = 5;
2904assert_eq!(some_var.load(Ordering::SeqCst), 5);
2905```"),
2906                #[inline]
2907                pub fn get_mut(&mut self) -> &mut $int_type {
2908                    // SAFETY: the mutable reference guarantees unique ownership.
2909                    unsafe { &mut *self.as_ptr() }
2910                }
2911            }
2912
2913            // TODO: Add from_mut/get_mut_slice/from_mut_slice once they are stable on std atomic types.
2914            // https://github.com/rust-lang/rust/issues/76314
2915
2916            #[cfg(not(portable_atomic_no_const_transmute))]
2917            doc_comment! {
2918                concat!("Consumes the atomic and returns the contained value.
2919
2920This is safe because passing `self` by value guarantees that no other threads are
2921concurrently accessing the atomic data.
2922
2923This is `const fn` on Rust 1.56+.
2924
2925# Examples
2926
2927```
2928use portable_atomic::", stringify!($atomic_type), ";
2929
2930let some_var = ", stringify!($atomic_type), "::new(5);
2931assert_eq!(some_var.into_inner(), 5);
2932```"),
2933                #[inline]
2934                pub const fn into_inner(self) -> $int_type {
2935                    // SAFETY: $atomic_type and $int_type have the same size and in-memory representations,
2936                    // so they can be safely transmuted.
2937                    // (const UnsafeCell::into_inner is unstable)
2938                    unsafe { core::mem::transmute(self) }
2939                }
2940            }
2941            #[cfg(portable_atomic_no_const_transmute)]
2942            doc_comment! {
2943                concat!("Consumes the atomic and returns the contained value.
2944
2945This is safe because passing `self` by value guarantees that no other threads are
2946concurrently accessing the atomic data.
2947
2948This is `const fn` on Rust 1.56+.
2949
2950# Examples
2951
2952```
2953use portable_atomic::", stringify!($atomic_type), ";
2954
2955let some_var = ", stringify!($atomic_type), "::new(5);
2956assert_eq!(some_var.into_inner(), 5);
2957```"),
2958                #[inline]
2959                pub fn into_inner(self) -> $int_type {
2960                    // SAFETY: $atomic_type and $int_type have the same size and in-memory representations,
2961                    // so they can be safely transmuted.
2962                    // (const UnsafeCell::into_inner is unstable)
2963                    unsafe { core::mem::transmute(self) }
2964                }
2965            }
2966
2967            doc_comment! {
2968                concat!("Loads a value from the atomic integer.
2969
2970`load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2971Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
2972
2973# Panics
2974
2975Panics if `order` is [`Release`] or [`AcqRel`].
2976
2977# Examples
2978
2979```
2980use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2981
2982let some_var = ", stringify!($atomic_type), "::new(5);
2983
2984assert_eq!(some_var.load(Ordering::Relaxed), 5);
2985```"),
2986                #[inline]
2987                #[cfg_attr(
2988                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
2989                    track_caller
2990                )]
2991                pub fn load(&self, order: Ordering) -> $int_type {
2992                    self.inner.load(order)
2993                }
2994            }
2995
2996            doc_comment! {
2997                concat!("Stores a value into the atomic integer.
2998
2999`store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
3000Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
3001
3002# Panics
3003
3004Panics if `order` is [`Acquire`] or [`AcqRel`].
3005
3006# Examples
3007
3008```
3009use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3010
3011let some_var = ", stringify!($atomic_type), "::new(5);
3012
3013some_var.store(10, Ordering::Relaxed);
3014assert_eq!(some_var.load(Ordering::Relaxed), 10);
3015```"),
3016                #[inline]
3017                #[cfg_attr(
3018                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3019                    track_caller
3020                )]
3021                pub fn store(&self, val: $int_type, order: Ordering) {
3022                    self.inner.store(val, order)
3023                }
3024            }
3025
3026            cfg_has_atomic_cas_or_amo32! {
3027            $cfg_has_atomic_cas_or_amo32_or_8! {
3028            doc_comment! {
3029                concat!("Stores a value into the atomic integer, returning the previous value.
3030
3031`swap` takes an [`Ordering`] argument which describes the memory ordering
3032of this operation. All ordering modes are possible. Note that using
3033[`Acquire`] makes the store part of this operation [`Relaxed`], and
3034using [`Release`] makes the load part [`Relaxed`].
3035
3036# Examples
3037
3038```
3039use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3040
3041let some_var = ", stringify!($atomic_type), "::new(5);
3042
3043assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
3044```"),
3045                #[inline]
3046                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3047                pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type {
3048                    self.inner.swap(val, order)
3049                }
3050            }
3051            } // $cfg_has_atomic_cas_or_amo32_or_8!
3052
3053            cfg_has_atomic_cas! {
3054            doc_comment! {
3055                concat!("Stores a value into the atomic integer if the current value is the same as
3056the `current` value.
3057
3058The return value is a result indicating whether the new value was written and
3059containing the previous value. On success this value is guaranteed to be equal to
3060`current`.
3061
3062`compare_exchange` takes two [`Ordering`] arguments to describe the memory
3063ordering of this operation. `success` describes the required ordering for the
3064read-modify-write operation that takes place if the comparison with `current` succeeds.
3065`failure` describes the required ordering for the load operation that takes place when
3066the comparison fails. Using [`Acquire`] as success ordering makes the store part
3067of this operation [`Relaxed`], and using [`Release`] makes the successful load
3068[`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3069
3070# Panics
3071
3072Panics if `failure` is [`Release`] or [`AcqRel`].
3073
3074# Examples
3075
3076```
3077use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3078
3079let some_var = ", stringify!($atomic_type), "::new(5);
3080
3081assert_eq!(
3082    some_var.compare_exchange(5, 10, Ordering::Acquire, Ordering::Relaxed),
3083    Ok(5),
3084);
3085assert_eq!(some_var.load(Ordering::Relaxed), 10);
3086
3087assert_eq!(
3088    some_var.compare_exchange(6, 12, Ordering::SeqCst, Ordering::Acquire),
3089    Err(10),
3090);
3091assert_eq!(some_var.load(Ordering::Relaxed), 10);
3092```"),
3093                #[inline]
3094                #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
3095                #[cfg_attr(
3096                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3097                    track_caller
3098                )]
3099                pub fn compare_exchange(
3100                    &self,
3101                    current: $int_type,
3102                    new: $int_type,
3103                    success: Ordering,
3104                    failure: Ordering,
3105                ) -> Result<$int_type, $int_type> {
3106                    self.inner.compare_exchange(current, new, success, failure)
3107                }
3108            }
3109
3110            doc_comment! {
3111                concat!("Stores a value into the atomic integer if the current value is the same as
3112the `current` value.
3113Unlike [`compare_exchange`](Self::compare_exchange),
3114this function is allowed to spuriously fail even
3115when the comparison succeeds, which can result in more efficient code on some
3116platforms. The return value is a result indicating whether the new value was
3117written and containing the previous value.
3118
3119`compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
3120ordering of this operation. `success` describes the required ordering for the
3121read-modify-write operation that takes place if the comparison with `current` succeeds.
3122`failure` describes the required ordering for the load operation that takes place when
3123the comparison fails. Using [`Acquire`] as success ordering makes the store part
3124of this operation [`Relaxed`], and using [`Release`] makes the successful load
3125[`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3126
3127# Panics
3128
3129Panics if `failure` is [`Release`] or [`AcqRel`].
3130
3131# Examples
3132
3133```
3134use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3135
3136let val = ", stringify!($atomic_type), "::new(4);
3137
3138let mut old = val.load(Ordering::Relaxed);
3139loop {
3140    let new = old * 2;
3141    match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
3142        Ok(_) => break,
3143        Err(x) => old = x,
3144    }
3145}
3146```"),
3147                #[inline]
3148                #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
3149                #[cfg_attr(
3150                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3151                    track_caller
3152                )]
3153                pub fn compare_exchange_weak(
3154                    &self,
3155                    current: $int_type,
3156                    new: $int_type,
3157                    success: Ordering,
3158                    failure: Ordering,
3159                ) -> Result<$int_type, $int_type> {
3160                    self.inner.compare_exchange_weak(current, new, success, failure)
3161                }
3162            }
3163            } // cfg_has_atomic_cas!
3164
3165            $cfg_has_atomic_cas_or_amo32_or_8! {
3166            doc_comment! {
3167                concat!("Adds to the current value, returning the previous value.
3168
3169This operation wraps around on overflow.
3170
3171`fetch_add` takes an [`Ordering`] argument which describes the memory ordering
3172of this operation. All ordering modes are possible. Note that using
3173[`Acquire`] makes the store part of this operation [`Relaxed`], and
3174using [`Release`] makes the load part [`Relaxed`].
3175
3176# Examples
3177
3178```
3179use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3180
3181let foo = ", stringify!($atomic_type), "::new(0);
3182assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
3183assert_eq!(foo.load(Ordering::SeqCst), 10);
3184```"),
3185                #[inline]
3186                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3187                pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type {
3188                    self.inner.fetch_add(val, order)
3189                }
3190            }
3191
3192            doc_comment! {
3193                concat!("Adds to the current value.
3194
3195This operation wraps around on overflow.
3196
3197Unlike `fetch_add`, this does not return the previous value.
3198
3199`add` takes an [`Ordering`] argument which describes the memory ordering
3200of this operation. All ordering modes are possible. Note that using
3201[`Acquire`] makes the store part of this operation [`Relaxed`], and
3202using [`Release`] makes the load part [`Relaxed`].
3203
3204This function may generate more efficient code than `fetch_add` on some platforms.
3205
3206- MSP430: `add` instead of disabling interrupts ({8,16}-bit atomics)
3207
3208# Examples
3209
3210```
3211use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3212
3213let foo = ", stringify!($atomic_type), "::new(0);
3214foo.add(10, Ordering::SeqCst);
3215assert_eq!(foo.load(Ordering::SeqCst), 10);
3216```"),
3217                #[inline]
3218                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3219                pub fn add(&self, val: $int_type, order: Ordering) {
3220                    self.inner.add(val, order);
3221                }
3222            }
3223
3224            doc_comment! {
3225                concat!("Subtracts from the current value, returning the previous value.
3226
3227This operation wraps around on overflow.
3228
3229`fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
3230of this operation. All ordering modes are possible. Note that using
3231[`Acquire`] makes the store part of this operation [`Relaxed`], and
3232using [`Release`] makes the load part [`Relaxed`].
3233
3234# Examples
3235
3236```
3237use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3238
3239let foo = ", stringify!($atomic_type), "::new(20);
3240assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
3241assert_eq!(foo.load(Ordering::SeqCst), 10);
3242```"),
3243                #[inline]
3244                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3245                pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type {
3246                    self.inner.fetch_sub(val, order)
3247                }
3248            }
3249
3250            doc_comment! {
3251                concat!("Subtracts from the current value.
3252
3253This operation wraps around on overflow.
3254
3255Unlike `fetch_sub`, this does not return the previous value.
3256
3257`sub` takes an [`Ordering`] argument which describes the memory ordering
3258of this operation. All ordering modes are possible. Note that using
3259[`Acquire`] makes the store part of this operation [`Relaxed`], and
3260using [`Release`] makes the load part [`Relaxed`].
3261
3262This function may generate more efficient code than `fetch_sub` on some platforms.
3263
3264- MSP430: `sub` instead of disabling interrupts ({8,16}-bit atomics)
3265
3266# Examples
3267
3268```
3269use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3270
3271let foo = ", stringify!($atomic_type), "::new(20);
3272foo.sub(10, Ordering::SeqCst);
3273assert_eq!(foo.load(Ordering::SeqCst), 10);
3274```"),
3275                #[inline]
3276                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3277                pub fn sub(&self, val: $int_type, order: Ordering) {
3278                    self.inner.sub(val, order);
3279                }
3280            }
3281            } // $cfg_has_atomic_cas_or_amo32_or_8!
3282
3283            doc_comment! {
3284                concat!("Bitwise \"and\" with the current value.
3285
3286Performs a bitwise \"and\" operation on the current value and the argument `val`, and
3287sets the new value to the result.
3288
3289Returns the previous value.
3290
3291`fetch_and` takes an [`Ordering`] argument which describes the memory ordering
3292of this operation. All ordering modes are possible. Note that using
3293[`Acquire`] makes the store part of this operation [`Relaxed`], and
3294using [`Release`] makes the load part [`Relaxed`].
3295
3296# Examples
3297
3298```
3299use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3300
3301let foo = ", stringify!($atomic_type), "::new(0b101101);
3302assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
3303assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
3304```"),
3305                #[inline]
3306                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3307                pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type {
3308                    self.inner.fetch_and(val, order)
3309                }
3310            }
3311
3312            doc_comment! {
3313                concat!("Bitwise \"and\" with the current value.
3314
3315Performs a bitwise \"and\" operation on the current value and the argument `val`, and
3316sets the new value to the result.
3317
3318Unlike `fetch_and`, this does not return the previous value.
3319
3320`and` takes an [`Ordering`] argument which describes the memory ordering
3321of this operation. All ordering modes are possible. Note that using
3322[`Acquire`] makes the store part of this operation [`Relaxed`], and
3323using [`Release`] makes the load part [`Relaxed`].
3324
3325This function may generate more efficient code than `fetch_and` on some platforms.
3326
3327- x86/x86_64: `lock and` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86; additionally, 64-bit atomics on x86_64)
3328- MSP430: `and` instead of disabling interrupts ({8,16}-bit atomics)
3329
3330Note: On x86/x86_64, the use of either function should not usually
3331affect the generated code, because LLVM can properly optimize the case
3332where the result is unused.
3333
3334# Examples
3335
3336```
3337use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3338
3339let foo = ", stringify!($atomic_type), "::new(0b101101);
3340foo.and(0b110011, Ordering::SeqCst);
3341assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
3342```"),
3343                #[inline]
3344                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3345                pub fn and(&self, val: $int_type, order: Ordering) {
3346                    self.inner.and(val, order);
3347                }
3348            }
3349
3350            cfg_has_atomic_cas! {
3351            doc_comment! {
3352                concat!("Bitwise \"nand\" with the current value.
3353
3354Performs a bitwise \"nand\" operation on the current value and the argument `val`, and
3355sets the new value to the result.
3356
3357Returns the previous value.
3358
3359`fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
3360of this operation. All ordering modes are possible. Note that using
3361[`Acquire`] makes the store part of this operation [`Relaxed`], and
3362using [`Release`] makes the load part [`Relaxed`].
3363
3364# Examples
3365
3366```
3367use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3368
3369let foo = ", stringify!($atomic_type), "::new(0x13);
3370assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
3371assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
3372```"),
3373                #[inline]
3374                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3375                pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type {
3376                    self.inner.fetch_nand(val, order)
3377                }
3378            }
3379            } // cfg_has_atomic_cas!
3380
3381            doc_comment! {
3382                concat!("Bitwise \"or\" with the current value.
3383
3384Performs a bitwise \"or\" operation on the current value and the argument `val`, and
3385sets the new value to the result.
3386
3387Returns the previous value.
3388
3389`fetch_or` takes an [`Ordering`] argument which describes the memory ordering
3390of this operation. All ordering modes are possible. Note that using
3391[`Acquire`] makes the store part of this operation [`Relaxed`], and
3392using [`Release`] makes the load part [`Relaxed`].
3393
3394# Examples
3395
3396```
3397use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3398
3399let foo = ", stringify!($atomic_type), "::new(0b101101);
3400assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
3401assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3402```"),
3403                #[inline]
3404                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3405                pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type {
3406                    self.inner.fetch_or(val, order)
3407                }
3408            }
3409
3410            doc_comment! {
3411                concat!("Bitwise \"or\" with the current value.
3412
3413Performs a bitwise \"or\" operation on the current value and the argument `val`, and
3414sets the new value to the result.
3415
3416Unlike `fetch_or`, this does not return the previous value.
3417
3418`or` takes an [`Ordering`] argument which describes the memory ordering
3419of this operation. All ordering modes are possible. Note that using
3420[`Acquire`] makes the store part of this operation [`Relaxed`], and
3421using [`Release`] makes the load part [`Relaxed`].
3422
3423This function may generate more efficient code than `fetch_or` on some platforms.
3424
3425- x86/x86_64: `lock or` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86; additionally, 64-bit atomics on x86_64)
3426- MSP430: `or` instead of disabling interrupts ({8,16}-bit atomics)
3427
3428Note: On x86/x86_64, the use of either function should not usually
3429affect the generated code, because LLVM can properly optimize the case
3430where the result is unused.
3431
3432# Examples
3433
3434```
3435use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3436
3437let foo = ", stringify!($atomic_type), "::new(0b101101);
3438foo.or(0b110011, Ordering::SeqCst);
3439assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3440```"),
3441                #[inline]
3442                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3443                pub fn or(&self, val: $int_type, order: Ordering) {
3444                    self.inner.or(val, order);
3445                }
3446            }
3447
3448            doc_comment! {
3449                concat!("Bitwise \"xor\" with the current value.
3450
3451Performs a bitwise \"xor\" operation on the current value and the argument `val`, and
3452sets the new value to the result.
3453
3454Returns the previous value.
3455
3456`fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
3457of this operation. All ordering modes are possible. Note that using
3458[`Acquire`] makes the store part of this operation [`Relaxed`], and
3459using [`Release`] makes the load part [`Relaxed`].
3460
3461# Examples
3462
3463```
3464use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3465
3466let foo = ", stringify!($atomic_type), "::new(0b101101);
3467assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
3468assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3469```"),
3470                #[inline]
3471                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3472                pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type {
3473                    self.inner.fetch_xor(val, order)
3474                }
3475            }
3476
3477            doc_comment! {
3478                concat!("Bitwise \"xor\" with the current value.
3479
3480Performs a bitwise \"xor\" operation on the current value and the argument `val`, and
3481sets the new value to the result.
3482
3483Unlike `fetch_xor`, this does not return the previous value.
3484
3485`xor` takes an [`Ordering`] argument which describes the memory ordering
3486of this operation. All ordering modes are possible. Note that using
3487[`Acquire`] makes the store part of this operation [`Relaxed`], and
3488using [`Release`] makes the load part [`Relaxed`].
3489
3490This function may generate more efficient code than `fetch_xor` on some platforms.
3491
3492- x86/x86_64: `lock xor` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86; additionally, 64-bit atomics on x86_64)
3493- MSP430: `xor` instead of disabling interrupts ({8,16}-bit atomics)
3494
3495Note: On x86/x86_64, the use of either function should not usually
3496affect the generated code, because LLVM can properly optimize the case
3497where the result is unused.
3498
3499# Examples
3500
3501```
3502use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3503
3504let foo = ", stringify!($atomic_type), "::new(0b101101);
3505foo.xor(0b110011, Ordering::SeqCst);
3506assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3507```"),
3508                #[inline]
3509                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3510                pub fn xor(&self, val: $int_type, order: Ordering) {
3511                    self.inner.xor(val, order);
3512                }
3513            }
3514
3515            cfg_has_atomic_cas! {
3516            doc_comment! {
3517                concat!("Fetches the value, and applies a function to it that returns an optional
3518new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3519`Err(previous_value)`.
3520
3521Note: This may call the function multiple times if the value has been changed from other threads in
3522the meantime, as long as the function returns `Some(_)`, but the function will have been applied
3523only once to the stored value.
3524
3525`fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3526The first describes the required ordering for when the operation finally succeeds while the second
3527describes the required ordering for loads. These correspond to the success and failure orderings of
3528[`compare_exchange`](Self::compare_exchange) respectively.
3529
3530Using [`Acquire`] as success ordering makes the store part
3531of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3532[`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3533
3534# Panics
3535
3536Panics if `fetch_order` is [`Release`] or [`AcqRel`].
3537
3538# Considerations
3539
3540This method is not magic; it is not provided by the hardware.
3541It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
3542and suffers from the same drawbacks.
3543In particular, this method will not circumvent the [ABA Problem].
3544
3545[ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3546
3547# Examples
3548
3549```
3550use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3551
3552let x = ", stringify!($atomic_type), "::new(7);
3553assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
3554assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
3555assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
3556assert_eq!(x.load(Ordering::SeqCst), 9);
3557```"),
3558                #[inline]
3559                #[cfg_attr(
3560                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3561                    track_caller
3562                )]
3563                pub fn fetch_update<F>(
3564                    &self,
3565                    set_order: Ordering,
3566                    fetch_order: Ordering,
3567                    mut f: F,
3568                ) -> Result<$int_type, $int_type>
3569                where
3570                    F: FnMut($int_type) -> Option<$int_type>,
3571                {
3572                    let mut prev = self.load(fetch_order);
3573                    while let Some(next) = f(prev) {
3574                        match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
3575                            x @ Ok(_) => return x,
3576                            Err(next_prev) => prev = next_prev,
3577                        }
3578                    }
3579                    Err(prev)
3580                }
3581            }
3582            } // cfg_has_atomic_cas!
3583
3584            $cfg_has_atomic_cas_or_amo32_or_8! {
3585            doc_comment! {
3586                concat!("Maximum with the current value.
3587
3588Finds the maximum of the current value and the argument `val`, and
3589sets the new value to the result.
3590
3591Returns the previous value.
3592
3593`fetch_max` takes an [`Ordering`] argument which describes the memory ordering
3594of this operation. All ordering modes are possible. Note that using
3595[`Acquire`] makes the store part of this operation [`Relaxed`], and
3596using [`Release`] makes the load part [`Relaxed`].
3597
3598# Examples
3599
3600```
3601use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3602
3603let foo = ", stringify!($atomic_type), "::new(23);
3604assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
3605assert_eq!(foo.load(Ordering::SeqCst), 42);
3606```
3607
3608If you want to obtain the maximum value in one step, you can use the following:
3609
3610```
3611use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3612
3613let foo = ", stringify!($atomic_type), "::new(23);
3614let bar = 42;
3615let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
3616assert_eq!(max_foo, 42);
3617```"),
3618                #[inline]
3619                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3620                pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type {
3621                    self.inner.fetch_max(val, order)
3622                }
3623            }
3624
3625            doc_comment! {
3626                concat!("Minimum with the current value.
3627
3628Finds the minimum of the current value and the argument `val`, and
3629sets the new value to the result.
3630
3631Returns the previous value.
3632
3633`fetch_min` takes an [`Ordering`] argument which describes the memory ordering
3634of this operation. All ordering modes are possible. Note that using
3635[`Acquire`] makes the store part of this operation [`Relaxed`], and
3636using [`Release`] makes the load part [`Relaxed`].
3637
3638# Examples
3639
3640```
3641use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3642
3643let foo = ", stringify!($atomic_type), "::new(23);
3644assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
3645assert_eq!(foo.load(Ordering::Relaxed), 23);
3646assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
3647assert_eq!(foo.load(Ordering::Relaxed), 22);
3648```
3649
3650If you want to obtain the minimum value in one step, you can use the following:
3651
3652```
3653use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3654
3655let foo = ", stringify!($atomic_type), "::new(23);
3656let bar = 12;
3657let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
3658assert_eq!(min_foo, 12);
3659```"),
3660                #[inline]
3661                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3662                pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type {
3663                    self.inner.fetch_min(val, order)
3664                }
3665            }
3666            } // $cfg_has_atomic_cas_or_amo32_or_8!
3667
3668            doc_comment! {
3669                concat!("Sets the bit at the specified bit-position to 1.
3670
3671Returns `true` if the specified bit was previously set to 1.
3672
3673`bit_set` takes an [`Ordering`] argument which describes the memory ordering
3674of this operation. All ordering modes are possible. Note that using
3675[`Acquire`] makes the store part of this operation [`Relaxed`], and
3676using [`Release`] makes the load part [`Relaxed`].
3677
3678This corresponds to x86's `lock bts`, and the implementation uses it on x86/x86_64.
3679
3680# Examples
3681
3682```
3683use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3684
3685let foo = ", stringify!($atomic_type), "::new(0b0000);
3686assert!(!foo.bit_set(0, Ordering::Relaxed));
3687assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3688assert!(foo.bit_set(0, Ordering::Relaxed));
3689assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3690```"),
3691                #[inline]
3692                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3693                pub fn bit_set(&self, bit: u32, order: Ordering) -> bool {
3694                    self.inner.bit_set(bit, order)
3695                }
3696            }
3697
3698            doc_comment! {
3699                concat!("Clears the bit at the specified bit-position to 1.
3700
3701Returns `true` if the specified bit was previously set to 1.
3702
3703`bit_clear` takes an [`Ordering`] argument which describes the memory ordering
3704of this operation. All ordering modes are possible. Note that using
3705[`Acquire`] makes the store part of this operation [`Relaxed`], and
3706using [`Release`] makes the load part [`Relaxed`].
3707
3708This corresponds to x86's `lock btr`, and the implementation uses it on x86/x86_64.
3709
3710# Examples
3711
3712```
3713use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3714
3715let foo = ", stringify!($atomic_type), "::new(0b0001);
3716assert!(foo.bit_clear(0, Ordering::Relaxed));
3717assert_eq!(foo.load(Ordering::Relaxed), 0b0000);
3718```"),
3719                #[inline]
3720                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3721                pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool {
3722                    self.inner.bit_clear(bit, order)
3723                }
3724            }
3725
3726            doc_comment! {
3727                concat!("Toggles the bit at the specified bit-position.
3728
3729Returns `true` if the specified bit was previously set to 1.
3730
3731`bit_toggle` takes an [`Ordering`] argument which describes the memory ordering
3732of this operation. All ordering modes are possible. Note that using
3733[`Acquire`] makes the store part of this operation [`Relaxed`], and
3734using [`Release`] makes the load part [`Relaxed`].
3735
3736This corresponds to x86's `lock btc`, and the implementation uses it on x86/x86_64.
3737
3738# Examples
3739
3740```
3741use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3742
3743let foo = ", stringify!($atomic_type), "::new(0b0000);
3744assert!(!foo.bit_toggle(0, Ordering::Relaxed));
3745assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3746assert!(foo.bit_toggle(0, Ordering::Relaxed));
3747assert_eq!(foo.load(Ordering::Relaxed), 0b0000);
3748```"),
3749                #[inline]
3750                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3751                pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool {
3752                    self.inner.bit_toggle(bit, order)
3753                }
3754            }
3755
3756            doc_comment! {
3757                concat!("Logical negates the current value, and sets the new value to the result.
3758
3759Returns the previous value.
3760
3761`fetch_not` takes an [`Ordering`] argument which describes the memory ordering
3762of this operation. All ordering modes are possible. Note that using
3763[`Acquire`] makes the store part of this operation [`Relaxed`], and
3764using [`Release`] makes the load part [`Relaxed`].
3765
3766# Examples
3767
3768```
3769use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3770
3771let foo = ", stringify!($atomic_type), "::new(0);
3772assert_eq!(foo.fetch_not(Ordering::Relaxed), 0);
3773assert_eq!(foo.load(Ordering::Relaxed), !0);
3774```"),
3775                #[inline]
3776                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3777                pub fn fetch_not(&self, order: Ordering) -> $int_type {
3778                    self.inner.fetch_not(order)
3779                }
3780            }
3781
3782            doc_comment! {
3783                concat!("Logical negates the current value, and sets the new value to the result.
3784
3785Unlike `fetch_not`, this does not return the previous value.
3786
3787`not` takes an [`Ordering`] argument which describes the memory ordering
3788of this operation. All ordering modes are possible. Note that using
3789[`Acquire`] makes the store part of this operation [`Relaxed`], and
3790using [`Release`] makes the load part [`Relaxed`].
3791
3792This function may generate more efficient code than `fetch_not` on some platforms.
3793
3794- x86/x86_64: `lock not` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86; additionally, 64-bit atomics on x86_64)
3795- MSP430: `inv` instead of disabling interrupts ({8,16}-bit atomics)
3796
3797# Examples
3798
3799```
3800use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3801
3802let foo = ", stringify!($atomic_type), "::new(0);
3803foo.not(Ordering::Relaxed);
3804assert_eq!(foo.load(Ordering::Relaxed), !0);
3805```"),
3806                #[inline]
3807                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3808                pub fn not(&self, order: Ordering) {
3809                    self.inner.not(order);
3810                }
3811            }
3812
3813            cfg_has_atomic_cas! {
3814            doc_comment! {
3815                concat!("Negates the current value, and sets the new value to the result.
3816
3817Returns the previous value.
3818
3819`fetch_neg` takes an [`Ordering`] argument which describes the memory ordering
3820of this operation. All ordering modes are possible. Note that using
3821[`Acquire`] makes the store part of this operation [`Relaxed`], and
3822using [`Release`] makes the load part [`Relaxed`].
3823
3824# Examples
3825
3826```
3827use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3828
3829let foo = ", stringify!($atomic_type), "::new(5);
3830assert_eq!(foo.fetch_neg(Ordering::Relaxed), 5);
3831assert_eq!(foo.load(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
3832assert_eq!(foo.fetch_neg(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
3833assert_eq!(foo.load(Ordering::Relaxed), 5);
3834```"),
3835                #[inline]
3836                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3837                pub fn fetch_neg(&self, order: Ordering) -> $int_type {
3838                    self.inner.fetch_neg(order)
3839                }
3840            }
3841
3842            doc_comment! {
3843                concat!("Negates the current value, and sets the new value to the result.
3844
3845Unlike `fetch_neg`, this does not return the previous value.
3846
3847`neg` takes an [`Ordering`] argument which describes the memory ordering
3848of this operation. All ordering modes are possible. Note that using
3849[`Acquire`] makes the store part of this operation [`Relaxed`], and
3850using [`Release`] makes the load part [`Relaxed`].
3851
3852This function may generate more efficient code than `fetch_neg` on some platforms.
3853
3854- x86/x86_64: `lock neg` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86; additionally, 64-bit atomics on x86_64)
3855
3856# Examples
3857
3858```
3859use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3860
3861let foo = ", stringify!($atomic_type), "::new(5);
3862foo.neg(Ordering::Relaxed);
3863assert_eq!(foo.load(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
3864foo.neg(Ordering::Relaxed);
3865assert_eq!(foo.load(Ordering::Relaxed), 5);
3866```"),
3867                #[inline]
3868                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3869                pub fn neg(&self, order: Ordering) {
3870                    self.inner.neg(order);
3871                }
3872            }
3873            } // cfg_has_atomic_cas!
3874            } // cfg_has_atomic_cas_or_amo32!
3875
3876            const_fn! {
3877                const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
3878                /// Returns a mutable pointer to the underlying integer.
3879                ///
3880                /// Returning an `*mut` pointer from a shared reference to this atomic is
3881                /// safe because the atomic types work with interior mutability. Any use of
3882                /// the returned raw pointer requires an `unsafe` block and has to uphold
3883                /// the safety requirements. If there is concurrent access, note the following
3884                /// additional safety requirements:
3885                ///
3886                /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
3887                ///   operations on it must be atomic.
3888                /// - Otherwise, any concurrent operations on it must be compatible with
3889                ///   operations performed by this atomic type.
3890                ///
3891                /// This is `const fn` on Rust 1.58+.
3892                #[inline]
3893                pub const fn as_ptr(&self) -> *mut $int_type {
3894                    self.inner.as_ptr()
3895                }
3896            }
3897        }
3898        // See https://github.com/taiki-e/portable-atomic/issues/180
3899        #[cfg(not(feature = "require-cas"))]
3900        cfg_no_atomic_cas! {
3901        #[doc(hidden)]
3902        #[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
3903        impl<'a> $atomic_type {
3904            $cfg_no_atomic_cas_or_amo32_or_8! {
3905            #[inline]
3906            pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type
3907            where
3908                &'a Self: HasSwap,
3909            {
3910                unimplemented!()
3911            }
3912            } // $cfg_no_atomic_cas_or_amo32_or_8!
3913            #[inline]
3914            pub fn compare_exchange(
3915                &self,
3916                current: $int_type,
3917                new: $int_type,
3918                success: Ordering,
3919                failure: Ordering,
3920            ) -> Result<$int_type, $int_type>
3921            where
3922                &'a Self: HasCompareExchange,
3923            {
3924                unimplemented!()
3925            }
3926            #[inline]
3927            pub fn compare_exchange_weak(
3928                &self,
3929                current: $int_type,
3930                new: $int_type,
3931                success: Ordering,
3932                failure: Ordering,
3933            ) -> Result<$int_type, $int_type>
3934            where
3935                &'a Self: HasCompareExchangeWeak,
3936            {
3937                unimplemented!()
3938            }
3939            $cfg_no_atomic_cas_or_amo32_or_8! {
3940            #[inline]
3941            pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type
3942            where
3943                &'a Self: HasFetchAdd,
3944            {
3945                unimplemented!()
3946            }
3947            #[inline]
3948            pub fn add(&self, val: $int_type, order: Ordering)
3949            where
3950                &'a Self: HasAdd,
3951            {
3952                unimplemented!()
3953            }
3954            #[inline]
3955            pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type
3956            where
3957                &'a Self: HasFetchSub,
3958            {
3959                unimplemented!()
3960            }
3961            #[inline]
3962            pub fn sub(&self, val: $int_type, order: Ordering)
3963            where
3964                &'a Self: HasSub,
3965            {
3966                unimplemented!()
3967            }
3968            } // $cfg_no_atomic_cas_or_amo32_or_8!
3969            cfg_no_atomic_cas_or_amo32! {
3970            #[inline]
3971            pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type
3972            where
3973                &'a Self: HasFetchAnd,
3974            {
3975                unimplemented!()
3976            }
3977            #[inline]
3978            pub fn and(&self, val: $int_type, order: Ordering)
3979            where
3980                &'a Self: HasAnd,
3981            {
3982                unimplemented!()
3983            }
3984            } // cfg_no_atomic_cas_or_amo32!
3985            #[inline]
3986            pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type
3987            where
3988                &'a Self: HasFetchNand,
3989            {
3990                unimplemented!()
3991            }
3992            cfg_no_atomic_cas_or_amo32! {
3993            #[inline]
3994            pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type
3995            where
3996                &'a Self: HasFetchOr,
3997            {
3998                unimplemented!()
3999            }
4000            #[inline]
4001            pub fn or(&self, val: $int_type, order: Ordering)
4002            where
4003                &'a Self: HasOr,
4004            {
4005                unimplemented!()
4006            }
4007            #[inline]
4008            pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type
4009            where
4010                &'a Self: HasFetchXor,
4011            {
4012                unimplemented!()
4013            }
4014            #[inline]
4015            pub fn xor(&self, val: $int_type, order: Ordering)
4016            where
4017                &'a Self: HasXor,
4018            {
4019                unimplemented!()
4020            }
4021            } // cfg_no_atomic_cas_or_amo32!
4022            #[inline]
4023            pub fn fetch_update<F>(
4024                &self,
4025                set_order: Ordering,
4026                fetch_order: Ordering,
4027                f: F,
4028            ) -> Result<$int_type, $int_type>
4029            where
4030                F: FnMut($int_type) -> Option<$int_type>,
4031                &'a Self: HasFetchUpdate,
4032            {
4033                unimplemented!()
4034            }
4035            $cfg_no_atomic_cas_or_amo32_or_8! {
4036            #[inline]
4037            pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type
4038            where
4039                &'a Self: HasFetchMax,
4040            {
4041                unimplemented!()
4042            }
4043            #[inline]
4044            pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type
4045            where
4046                &'a Self: HasFetchMin,
4047            {
4048                unimplemented!()
4049            }
4050            } // $cfg_no_atomic_cas_or_amo32_or_8!
4051            cfg_no_atomic_cas_or_amo32! {
4052            #[inline]
4053            pub fn bit_set(&self, bit: u32, order: Ordering) -> bool
4054            where
4055                &'a Self: HasBitSet,
4056            {
4057                unimplemented!()
4058            }
4059            #[inline]
4060            pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool
4061            where
4062                &'a Self: HasBitClear,
4063            {
4064                unimplemented!()
4065            }
4066            #[inline]
4067            pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool
4068            where
4069                &'a Self: HasBitToggle,
4070            {
4071                unimplemented!()
4072            }
4073            #[inline]
4074            pub fn fetch_not(&self, order: Ordering) -> $int_type
4075            where
4076                &'a Self: HasFetchNot,
4077            {
4078                unimplemented!()
4079            }
4080            #[inline]
4081            pub fn not(&self, order: Ordering)
4082            where
4083                &'a Self: HasNot,
4084            {
4085                unimplemented!()
4086            }
4087            } // cfg_no_atomic_cas_or_amo32!
4088            #[inline]
4089            pub fn fetch_neg(&self, order: Ordering) -> $int_type
4090            where
4091                &'a Self: HasFetchNeg,
4092            {
4093                unimplemented!()
4094            }
4095            #[inline]
4096            pub fn neg(&self, order: Ordering)
4097            where
4098                &'a Self: HasNeg,
4099            {
4100                unimplemented!()
4101            }
4102        }
4103        } // cfg_no_atomic_cas!
4104        $(
4105            #[$cfg_float]
4106            atomic_int!(float, $atomic_float_type, $float_type, $atomic_type, $int_type, $align);
4107        )?
4108    };
4109
4110    // AtomicF* impls
4111    (float,
4112        $atomic_type:ident,
4113        $float_type:ident,
4114        $atomic_int_type:ident,
4115        $int_type:ident,
4116        $align:literal
4117    ) => {
4118        doc_comment! {
4119            concat!("A floating point type which can be safely shared between threads.
4120
4121This type has the same in-memory representation as the underlying floating point type,
4122[`", stringify!($float_type), "`].
4123"
4124            ),
4125            #[cfg_attr(docsrs, doc(cfg(feature = "float")))]
            // We could use #[repr(transparent)] here, but #[repr(C, align(N))]
            // shows clearer docs.
4128            #[repr(C, align($align))]
4129            pub struct $atomic_type {
4130                inner: imp::float::$atomic_type,
4131            }
4132        }
4133
4134        impl Default for $atomic_type {
4135            #[inline]
4136            fn default() -> Self {
4137                Self::new($float_type::default())
4138            }
4139        }
4140
4141        impl From<$float_type> for $atomic_type {
4142            #[inline]
4143            fn from(v: $float_type) -> Self {
4144                Self::new(v)
4145            }
4146        }
4147
4148        // UnwindSafe is implicitly implemented.
4149        #[cfg(not(portable_atomic_no_core_unwind_safe))]
4150        impl core::panic::RefUnwindSafe for $atomic_type {}
4151        #[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
4152        impl std::panic::RefUnwindSafe for $atomic_type {}
4153
4154        impl_debug_and_serde!($atomic_type);
4155
4156        impl $atomic_type {
4157            /// Creates a new atomic float.
4158            #[inline]
4159            #[must_use]
4160            pub const fn new(v: $float_type) -> Self {
4161                static_assert_layout!($atomic_type, $float_type);
4162                Self { inner: imp::float::$atomic_type::new(v) }
4163            }
4164
4165            // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
4166            #[cfg(not(portable_atomic_no_const_mut_refs))]
4167            doc_comment! {
4168                concat!("Creates a new reference to an atomic float from a pointer.
4169
4170This is `const fn` on Rust 1.83+.
4171
4172# Safety
4173
4174* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
4175  can be bigger than `align_of::<", stringify!($float_type), ">()`).
4176* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
4177* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
4178  behind `ptr` must have a happens-before relationship with atomic accesses via
4179  the returned value (or vice-versa).
4180  * In other words, time periods where the value is accessed atomically may not
4181    overlap with periods where the value is accessed non-atomically.
4182  * This requirement is trivially satisfied if `ptr` is never used non-atomically
4183    for the duration of lifetime `'a`. Most use cases should be able to follow
4184    this guideline.
4185  * This requirement is also trivially satisfied if all accesses (atomic or not) are
4186    done from the same thread.
4187* If this atomic type is *not* lock-free:
4188  * Any accesses to the value behind `ptr` must have a happens-before relationship
4189    with accesses via the returned value (or vice-versa).
4190  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
4191    be compatible with operations performed by this atomic type.
4192* This method must not be used to create overlapping or mixed-size atomic
4193  accesses, as these are not supported by the memory model.
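
# Examples

A minimal usage sketch; the pointer here comes from an existing atomic of the
same type, which guarantees validity and alignment:

```
use portable_atomic::{", stringify!($atomic_type), ", Ordering};

let a = ", stringify!($atomic_type), "::new(1.0);
// SAFETY: `a.as_ptr()` is valid and properly aligned for the whole lifetime
// of `r`, and the value is only ever accessed atomically.
let r = unsafe { ", stringify!($atomic_type), "::from_ptr(a.as_ptr()) };
r.store(2.0, Ordering::Relaxed);
assert_eq!(a.load(Ordering::Relaxed), 2.0);
```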
4194
4195[valid]: core::ptr#safety"),
4196                #[inline]
4197                #[must_use]
4198                pub const unsafe fn from_ptr<'a>(ptr: *mut $float_type) -> &'a Self {
4199                    #[allow(clippy::cast_ptr_alignment)]
4200                    // SAFETY: guaranteed by the caller
4201                    unsafe { &*(ptr as *mut Self) }
4202                }
4203            }
4204            #[cfg(portable_atomic_no_const_mut_refs)]
4205            doc_comment! {
4206                concat!("Creates a new reference to an atomic float from a pointer.
4207
4208This is `const fn` on Rust 1.83+.
4209
4210# Safety
4211
4212* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
4213  can be bigger than `align_of::<", stringify!($float_type), ">()`).
4214* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
4215* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
4216  behind `ptr` must have a happens-before relationship with atomic accesses via
4217  the returned value (or vice-versa).
4218  * In other words, time periods where the value is accessed atomically may not
4219    overlap with periods where the value is accessed non-atomically.
4220  * This requirement is trivially satisfied if `ptr` is never used non-atomically
4221    for the duration of lifetime `'a`. Most use cases should be able to follow
4222    this guideline.
4223  * This requirement is also trivially satisfied if all accesses (atomic or not) are
4224    done from the same thread.
4225* If this atomic type is *not* lock-free:
4226  * Any accesses to the value behind `ptr` must have a happens-before relationship
4227    with accesses via the returned value (or vice-versa).
4228  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
4229    be compatible with operations performed by this atomic type.
4230* This method must not be used to create overlapping or mixed-size atomic
4231  accesses, as these are not supported by the memory model.
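
# Examples

A minimal usage sketch; the pointer here comes from an existing atomic of the
same type, which guarantees validity and alignment:

```
use portable_atomic::{", stringify!($atomic_type), ", Ordering};

let a = ", stringify!($atomic_type), "::new(1.0);
// SAFETY: `a.as_ptr()` is valid and properly aligned for the whole lifetime
// of `r`, and the value is only ever accessed atomically.
let r = unsafe { ", stringify!($atomic_type), "::from_ptr(a.as_ptr()) };
r.store(2.0, Ordering::Relaxed);
assert_eq!(a.load(Ordering::Relaxed), 2.0);
```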
4232
4233[valid]: core::ptr#safety"),
4234                #[inline]
4235                #[must_use]
4236                pub unsafe fn from_ptr<'a>(ptr: *mut $float_type) -> &'a Self {
4237                    #[allow(clippy::cast_ptr_alignment)]
4238                    // SAFETY: guaranteed by the caller
4239                    unsafe { &*(ptr as *mut Self) }
4240                }
4241            }
4242
4243            /// Returns `true` if operations on values of this type are lock-free.
4244            ///
4245            /// If the compiler or the platform doesn't support the necessary
4246            /// atomic instructions, global locks for every potentially
4247            /// concurrent atomic operation will be used.
4248            #[inline]
4249            #[must_use]
4250            pub fn is_lock_free() -> bool {
4251                <imp::float::$atomic_type>::is_lock_free()
4252            }
4253
4254            /// Returns `true` if operations on values of this type are lock-free.
4255            ///
4256            /// If the compiler or the platform doesn't support the necessary
4257            /// atomic instructions, global locks for every potentially
4258            /// concurrent atomic operation will be used.
4259            ///
4260            /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
4261            /// this type may be lock-free even if the function returns false.
4262            #[inline]
4263            #[must_use]
4264            pub const fn is_always_lock_free() -> bool {
4265                <imp::float::$atomic_type>::IS_ALWAYS_LOCK_FREE
4266            }
4267            #[cfg(test)]
4268            const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();
4269
4270            const_fn! {
4271                const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
4272                /// Returns a mutable reference to the underlying float.
4273                ///
4274                /// This is safe because the mutable reference guarantees that no other threads are
4275                /// concurrently accessing the atomic data.
4276                ///
4277                /// This is `const fn` on Rust 1.83+.
4278                #[inline]
4279                pub const fn get_mut(&mut self) -> &mut $float_type {
4280                    // SAFETY: the mutable reference guarantees unique ownership.
4281                    unsafe { &mut *self.as_ptr() }
4282                }
4283            }
4284
            // TODO: Add from_mut/get_mut_slice/from_mut_slice once they are stable on std atomic types.
4286            // https://github.com/rust-lang/rust/issues/76314
4287
4288            const_fn! {
4289                const_if: #[cfg(not(portable_atomic_no_const_transmute))];
4290                /// Consumes the atomic and returns the contained value.
4291                ///
4292                /// This is safe because passing `self` by value guarantees that no other threads are
4293                /// concurrently accessing the atomic data.
4294                ///
4295                /// This is `const fn` on Rust 1.56+.
4296                #[inline]
4297                pub const fn into_inner(self) -> $float_type {
4298                    // SAFETY: $atomic_type and $float_type have the same size and in-memory representations,
4299                    // so they can be safely transmuted.
4300                    // (const UnsafeCell::into_inner is unstable)
4301                    unsafe { core::mem::transmute(self) }
4302                }
4303            }
4304
4305            /// Loads a value from the atomic float.
4306            ///
4307            /// `load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
4308            /// Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
4309            ///
4310            /// # Panics
4311            ///
4312            /// Panics if `order` is [`Release`] or [`AcqRel`].
4313            #[inline]
4314            #[cfg_attr(
4315                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
4316                track_caller
4317            )]
4318            pub fn load(&self, order: Ordering) -> $float_type {
4319                self.inner.load(order)
4320            }
4321
4322            /// Stores a value into the atomic float.
4323            ///
4324            /// `store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
            /// Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
4326            ///
4327            /// # Panics
4328            ///
4329            /// Panics if `order` is [`Acquire`] or [`AcqRel`].
4330            #[inline]
4331            #[cfg_attr(
4332                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
4333                track_caller
4334            )]
4335            pub fn store(&self, val: $float_type, order: Ordering) {
4336                self.inner.store(val, order)
4337            }
4338
4339            cfg_has_atomic_cas_or_amo32! {
4340            /// Stores a value into the atomic float, returning the previous value.
4341            ///
4342            /// `swap` takes an [`Ordering`] argument which describes the memory ordering
4343            /// of this operation. All ordering modes are possible. Note that using
4344            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4345            /// using [`Release`] makes the load part [`Relaxed`].
4346            #[inline]
4347            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4348            pub fn swap(&self, val: $float_type, order: Ordering) -> $float_type {
4349                self.inner.swap(val, order)
4350            }
4351
4352            cfg_has_atomic_cas! {
4353            /// Stores a value into the atomic float if the current value is the same as
4354            /// the `current` value.
4355            ///
4356            /// The return value is a result indicating whether the new value was written and
4357            /// containing the previous value. On success this value is guaranteed to be equal to
4358            /// `current`.
4359            ///
4360            /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
4361            /// ordering of this operation. `success` describes the required ordering for the
4362            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
4363            /// `failure` describes the required ordering for the load operation that takes place when
4364            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
4365            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
4366            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
4367            ///
4368            /// # Panics
4369            ///
            /// Panics if `failure` is [`Release`] or [`AcqRel`].
4371            #[inline]
4372            #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
4373            #[cfg_attr(
4374                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
4375                track_caller
4376            )]
4377            pub fn compare_exchange(
4378                &self,
4379                current: $float_type,
4380                new: $float_type,
4381                success: Ordering,
4382                failure: Ordering,
4383            ) -> Result<$float_type, $float_type> {
4384                self.inner.compare_exchange(current, new, success, failure)
4385            }
4386
4387            /// Stores a value into the atomic float if the current value is the same as
4388            /// the `current` value.
            ///
            /// Unlike [`compare_exchange`](Self::compare_exchange),
4390            /// this function is allowed to spuriously fail even
4391            /// when the comparison succeeds, which can result in more efficient code on some
4392            /// platforms. The return value is a result indicating whether the new value was
4393            /// written and containing the previous value.
4394            ///
4395            /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
4396            /// ordering of this operation. `success` describes the required ordering for the
4397            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
4398            /// `failure` describes the required ordering for the load operation that takes place when
4399            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
4400            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
4401            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
4402            ///
4403            /// # Panics
4404            ///
            /// Panics if `failure` is [`Release`] or [`AcqRel`].
4406            #[inline]
4407            #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
4408            #[cfg_attr(
4409                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
4410                track_caller
4411            )]
4412            pub fn compare_exchange_weak(
4413                &self,
4414                current: $float_type,
4415                new: $float_type,
4416                success: Ordering,
4417                failure: Ordering,
4418            ) -> Result<$float_type, $float_type> {
4419                self.inner.compare_exchange_weak(current, new, success, failure)
4420            }
4421
4422            /// Adds to the current value, returning the previous value.
4423            ///
            /// This operation follows IEEE 754 semantics: on overflow the result
            /// is an infinity rather than wrapping around.
4425            ///
4426            /// `fetch_add` takes an [`Ordering`] argument which describes the memory ordering
4427            /// of this operation. All ordering modes are possible. Note that using
4428            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4429            /// using [`Release`] makes the load part [`Relaxed`].
4430            #[inline]
4431            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4432            pub fn fetch_add(&self, val: $float_type, order: Ordering) -> $float_type {
4433                self.inner.fetch_add(val, order)
4434            }
4435
4436            /// Subtracts from the current value, returning the previous value.
4437            ///
            /// This operation follows IEEE 754 semantics: on overflow the result
            /// is an infinity rather than wrapping around.
4439            ///
4440            /// `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
4441            /// of this operation. All ordering modes are possible. Note that using
4442            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4443            /// using [`Release`] makes the load part [`Relaxed`].
4444            #[inline]
4445            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4446            pub fn fetch_sub(&self, val: $float_type, order: Ordering) -> $float_type {
4447                self.inner.fetch_sub(val, order)
4448            }
4449
4450            /// Fetches the value, and applies a function to it that returns an optional
4451            /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
4452            /// `Err(previous_value)`.
4453            ///
4454            /// Note: This may call the function multiple times if the value has been changed from other threads in
4455            /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
4456            /// only once to the stored value.
4457            ///
4458            /// `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
4459            /// The first describes the required ordering for when the operation finally succeeds while the second
4460            /// describes the required ordering for loads. These correspond to the success and failure orderings of
4461            /// [`compare_exchange`](Self::compare_exchange) respectively.
4462            ///
4463            /// Using [`Acquire`] as success ordering makes the store part
4464            /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
4465            /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
4466            ///
4467            /// # Panics
4468            ///
            /// Panics if `fetch_order` is [`Release`] or [`AcqRel`].
4470            ///
4471            /// # Considerations
4472            ///
4473            /// This method is not magic; it is not provided by the hardware.
4474            /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
4475            /// and suffers from the same drawbacks.
4476            /// In particular, this method will not circumvent the [ABA Problem].
4477            ///
4478            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
4479            #[inline]
4480            #[cfg_attr(
4481                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
4482                track_caller
4483            )]
4484            pub fn fetch_update<F>(
4485                &self,
4486                set_order: Ordering,
4487                fetch_order: Ordering,
4488                mut f: F,
4489            ) -> Result<$float_type, $float_type>
4490            where
4491                F: FnMut($float_type) -> Option<$float_type>,
4492            {
4493                let mut prev = self.load(fetch_order);
4494                while let Some(next) = f(prev) {
4495                    match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
4496                        x @ Ok(_) => return x,
4497                        Err(next_prev) => prev = next_prev,
4498                    }
4499                }
4500                Err(prev)
4501            }
4502
4503            /// Maximum with the current value.
4504            ///
4505            /// Finds the maximum of the current value and the argument `val`, and
4506            /// sets the new value to the result.
4507            ///
4508            /// Returns the previous value.
4509            ///
4510            /// `fetch_max` takes an [`Ordering`] argument which describes the memory ordering
4511            /// of this operation. All ordering modes are possible. Note that using
4512            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4513            /// using [`Release`] makes the load part [`Relaxed`].
4514            #[inline]
4515            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4516            pub fn fetch_max(&self, val: $float_type, order: Ordering) -> $float_type {
4517                self.inner.fetch_max(val, order)
4518            }
4519
4520            /// Minimum with the current value.
4521            ///
4522            /// Finds the minimum of the current value and the argument `val`, and
4523            /// sets the new value to the result.
4524            ///
4525            /// Returns the previous value.
4526            ///
4527            /// `fetch_min` takes an [`Ordering`] argument which describes the memory ordering
4528            /// of this operation. All ordering modes are possible. Note that using
4529            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4530            /// using [`Release`] makes the load part [`Relaxed`].
4531            #[inline]
4532            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4533            pub fn fetch_min(&self, val: $float_type, order: Ordering) -> $float_type {
4534                self.inner.fetch_min(val, order)
4535            }
4536            } // cfg_has_atomic_cas!
4537
4538            /// Negates the current value, and sets the new value to the result.
4539            ///
4540            /// Returns the previous value.
4541            ///
4542            /// `fetch_neg` takes an [`Ordering`] argument which describes the memory ordering
4543            /// of this operation. All ordering modes are possible. Note that using
4544            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4545            /// using [`Release`] makes the load part [`Relaxed`].
4546            #[inline]
4547            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4548            pub fn fetch_neg(&self, order: Ordering) -> $float_type {
4549                self.inner.fetch_neg(order)
4550            }
4551
4552            /// Computes the absolute value of the current value, and sets the
4553            /// new value to the result.
4554            ///
4555            /// Returns the previous value.
4556            ///
4557            /// `fetch_abs` takes an [`Ordering`] argument which describes the memory ordering
4558            /// of this operation. All ordering modes are possible. Note that using
4559            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4560            /// using [`Release`] makes the load part [`Relaxed`].
4561            #[inline]
4562            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4563            pub fn fetch_abs(&self, order: Ordering) -> $float_type {
4564                self.inner.fetch_abs(order)
4565            }
4566            } // cfg_has_atomic_cas_or_amo32!
4567
4568            #[cfg(not(portable_atomic_no_const_raw_ptr_deref))]
4569            doc_comment! {
4570                concat!("Raw transmutation to `&", stringify!($atomic_int_type), "`.
4571
4572See [`", stringify!($float_type) ,"::from_bits`] for some discussion of the
4573portability of this operation (there are almost no issues).
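
# Examples

A small sketch of the bit-level view:

```
use portable_atomic::{", stringify!($atomic_type), ", Ordering};

let x = ", stringify!($atomic_type), "::new(1.0);
assert_eq!(x.as_bits().load(Ordering::Relaxed), 1.0_", stringify!($float_type), ".to_bits());
```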
4574
4575This is `const fn` on Rust 1.58+."),
4576                #[inline]
4577                pub const fn as_bits(&self) -> &$atomic_int_type {
4578                    self.inner.as_bits()
4579                }
4580            }
4581            #[cfg(portable_atomic_no_const_raw_ptr_deref)]
4582            doc_comment! {
4583                concat!("Raw transmutation to `&", stringify!($atomic_int_type), "`.
4584
4585See [`", stringify!($float_type) ,"::from_bits`] for some discussion of the
4586portability of this operation (there are almost no issues).
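
# Examples

A small sketch of the bit-level view:

```
use portable_atomic::{", stringify!($atomic_type), ", Ordering};

let x = ", stringify!($atomic_type), "::new(1.0);
assert_eq!(x.as_bits().load(Ordering::Relaxed), 1.0_", stringify!($float_type), ".to_bits());
```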
4587
4588This is `const fn` on Rust 1.58+."),
4589                #[inline]
4590                pub fn as_bits(&self) -> &$atomic_int_type {
4591                    self.inner.as_bits()
4592                }
4593            }
4594
4595            const_fn! {
4596                const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
4597                /// Returns a mutable pointer to the underlying float.
4598                ///
4599                /// Returning an `*mut` pointer from a shared reference to this atomic is
4600                /// safe because the atomic types work with interior mutability. Any use of
4601                /// the returned raw pointer requires an `unsafe` block and has to uphold
4602                /// the safety requirements. If there is concurrent access, note the following
4603                /// additional safety requirements:
4604                ///
4605                /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
4606                ///   operations on it must be atomic.
4607                /// - Otherwise, any concurrent operations on it must be compatible with
4608                ///   operations performed by this atomic type.
4609                ///
4610                /// This is `const fn` on Rust 1.58+.
4611                #[inline]
4612                pub const fn as_ptr(&self) -> *mut $float_type {
4613                    self.inner.as_ptr()
4614                }
4615            }
4616        }
4617        // See https://github.com/taiki-e/portable-atomic/issues/180
4618        #[cfg(not(feature = "require-cas"))]
4619        cfg_no_atomic_cas! {
4620        #[doc(hidden)]
4621        #[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
4622        impl<'a> $atomic_type {
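            // Same diagnostic-stub pattern as the integer types above: the `Has*`
            // bounds are never satisfied, so these bodies are unreachable and exist
            // only to improve error messages on targets without atomic CAS.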
4623            cfg_no_atomic_cas_or_amo32! {
4624            #[inline]
4625            pub fn swap(&self, val: $float_type, order: Ordering) -> $float_type
4626            where
4627                &'a Self: HasSwap,
4628            {
4629                unimplemented!()
4630            }
4631            } // cfg_no_atomic_cas_or_amo32!
4632            #[inline]
4633            pub fn compare_exchange(
4634                &self,
4635                current: $float_type,
4636                new: $float_type,
4637                success: Ordering,
4638                failure: Ordering,
4639            ) -> Result<$float_type, $float_type>
4640            where
4641                &'a Self: HasCompareExchange,
4642            {
4643                unimplemented!()
4644            }
4645            #[inline]
4646            pub fn compare_exchange_weak(
4647                &self,
4648                current: $float_type,
4649                new: $float_type,
4650                success: Ordering,
4651                failure: Ordering,
4652            ) -> Result<$float_type, $float_type>
4653            where
4654                &'a Self: HasCompareExchangeWeak,
4655            {
4656                unimplemented!()
4657            }
4658            #[inline]
4659            pub fn fetch_add(&self, val: $float_type, order: Ordering) -> $float_type
4660            where
4661                &'a Self: HasFetchAdd,
4662            {
4663                unimplemented!()
4664            }
4665            #[inline]
4666            pub fn fetch_sub(&self, val: $float_type, order: Ordering) -> $float_type
4667            where
4668                &'a Self: HasFetchSub,
4669            {
4670                unimplemented!()
4671            }
4672            #[inline]
4673            pub fn fetch_update<F>(
4674                &self,
4675                set_order: Ordering,
4676                fetch_order: Ordering,
4677                f: F,
4678            ) -> Result<$float_type, $float_type>
4679            where
4680                F: FnMut($float_type) -> Option<$float_type>,
4681                &'a Self: HasFetchUpdate,
4682            {
4683                unimplemented!()
4684            }
4685            #[inline]
4686            pub fn fetch_max(&self, val: $float_type, order: Ordering) -> $float_type
4687            where
4688                &'a Self: HasFetchMax,
4689            {
4690                unimplemented!()
4691            }
4692            #[inline]
4693            pub fn fetch_min(&self, val: $float_type, order: Ordering) -> $float_type
4694            where
4695                &'a Self: HasFetchMin,
4696            {
4697                unimplemented!()
4698            }
4699            cfg_no_atomic_cas_or_amo32! {
4700            #[inline]
4701            pub fn fetch_neg(&self, order: Ordering) -> $float_type
4702            where
4703                &'a Self: HasFetchNeg,
4704            {
4705                unimplemented!()
4706            }
4707            #[inline]
4708            pub fn fetch_abs(&self, order: Ordering) -> $float_type
4709            where
4710                &'a Self: HasFetchAbs,
4711            {
4712                unimplemented!()
4713            }
4714            } // cfg_no_atomic_cas_or_amo32!
4715        }
4716        } // cfg_no_atomic_cas!
4717    };
4718}
4719
4720cfg_has_atomic_ptr! {
4721    #[cfg(target_pointer_width = "16")]
4722    atomic_int!(AtomicIsize, isize, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
4723    #[cfg(target_pointer_width = "16")]
4724    atomic_int!(AtomicUsize, usize, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
4725    #[cfg(target_pointer_width = "32")]
4726    atomic_int!(AtomicIsize, isize, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4727    #[cfg(target_pointer_width = "32")]
4728    atomic_int!(AtomicUsize, usize, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4729    #[cfg(target_pointer_width = "64")]
4730    atomic_int!(AtomicIsize, isize, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4731    #[cfg(target_pointer_width = "64")]
4732    atomic_int!(AtomicUsize, usize, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4733    #[cfg(target_pointer_width = "128")]
4734    atomic_int!(AtomicIsize, isize, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4735    #[cfg(target_pointer_width = "128")]
4736    atomic_int!(AtomicUsize, usize, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4737}
4738
4739cfg_has_atomic_8! {
4740    atomic_int!(AtomicI8, i8, 1, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
4741    atomic_int!(AtomicU8, u8, 1, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
4742}
4743cfg_has_atomic_16! {
4744    atomic_int!(AtomicI16, i16, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
4745    atomic_int!(AtomicU16, u16, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
        // TODO: support once https://github.com/rust-lang/rust/issues/116909 is stabilized.
4747        // #[cfg(all(feature = "float", not(portable_atomic_no_f16)))] AtomicF16, f16);
4748}
4749cfg_has_atomic_32! {
4750    atomic_int!(AtomicI32, i32, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4751    atomic_int!(AtomicU32, u32, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32,
4752        #[cfg(feature = "float")] AtomicF32, f32);
4753}
4754cfg_has_atomic_64! {
4755    atomic_int!(AtomicI64, i64, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4756    atomic_int!(AtomicU64, u64, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32,
4757        #[cfg(feature = "float")] AtomicF64, f64);
4758}
4759cfg_has_atomic_128! {
4760    atomic_int!(AtomicI128, i128, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4761    atomic_int!(AtomicU128, u128, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
        // TODO: support once https://github.com/rust-lang/rust/issues/116909 is stabilized.
4763        // #[cfg(all(feature = "float", not(portable_atomic_no_f128)))] AtomicF128, f128);
4764}
4765
4766// See https://github.com/taiki-e/portable-atomic/issues/180
4767#[cfg(not(feature = "require-cas"))]
4768cfg_no_atomic_cas! {
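// How these diagnostics work, in short: the traits imported below are never
// implemented for any type, so the `&'a Self: Has*` bounds on the stub methods
// above can never hold. On compilers with the `diagnostic` namespace (Rust
// 1.78+), `#[diagnostic::on_unimplemented]` then replaces the generic trait
// error with the actionable messages defined in `diagnostic_helper`.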
4769cfg_no_atomic_cas_or_amo32! {
4770#[cfg(feature = "float")]
4771use self::diagnostic_helper::HasFetchAbs;
4772use self::diagnostic_helper::{
4773    HasAnd, HasBitClear, HasBitSet, HasBitToggle, HasFetchAnd, HasFetchByteAdd, HasFetchByteSub,
4774    HasFetchNot, HasFetchOr, HasFetchPtrAdd, HasFetchPtrSub, HasFetchXor, HasNot, HasOr, HasXor,
4775};
4776} // cfg_no_atomic_cas_or_amo32!
4777cfg_no_atomic_cas_or_amo8! {
4778use self::diagnostic_helper::{HasAdd, HasSub, HasSwap};
4779} // cfg_no_atomic_cas_or_amo8!
4780#[cfg_attr(not(feature = "float"), allow(unused_imports))]
4781use self::diagnostic_helper::{
4782    HasCompareExchange, HasCompareExchangeWeak, HasFetchAdd, HasFetchMax, HasFetchMin,
4783    HasFetchNand, HasFetchNeg, HasFetchSub, HasFetchUpdate, HasNeg,
4784};
4785#[cfg_attr(
4786    any(
4787        all(
4788            portable_atomic_no_atomic_load_store,
4789            not(any(
4790                target_arch = "avr",
4791                target_arch = "bpf",
4792                target_arch = "msp430",
4793                target_arch = "riscv32",
4794                target_arch = "riscv64",
4795                feature = "critical-section",
4796            )),
4797        ),
4798        not(feature = "float"),
4799    ),
4800    allow(dead_code, unreachable_pub)
4801)]
4802mod diagnostic_helper {
4803    cfg_no_atomic_cas_or_amo8! {
4804    #[doc(hidden)]
4805    #[cfg_attr(
4806        not(portable_atomic_no_diagnostic_namespace),
4807        diagnostic::on_unimplemented(
            message = "`swap` requires atomic CAS, which is not available on this target by default",
4809            label = "this associated function is not available on this target by default",
4810            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4811            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4812        )
4813    )]
4814    pub trait HasSwap {}
4815    } // cfg_no_atomic_cas_or_amo8!
4816    #[doc(hidden)]
4817    #[cfg_attr(
4818        not(portable_atomic_no_diagnostic_namespace),
4819        diagnostic::on_unimplemented(
            message = "`compare_exchange` requires atomic CAS, which is not available on this target by default",
4821            label = "this associated function is not available on this target by default",
4822            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4823            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4824        )
4825    )]
4826    pub trait HasCompareExchange {}
4827    #[doc(hidden)]
4828    #[cfg_attr(
4829        not(portable_atomic_no_diagnostic_namespace),
4830        diagnostic::on_unimplemented(
            message = "`compare_exchange_weak` requires atomic CAS, which is not available on this target by default",
4832            label = "this associated function is not available on this target by default",
4833            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4834            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4835        )
4836    )]
4837    pub trait HasCompareExchangeWeak {}
4838    #[doc(hidden)]
4839    #[cfg_attr(
4840        not(portable_atomic_no_diagnostic_namespace),
4841        diagnostic::on_unimplemented(
            message = "`fetch_add` requires atomic CAS, which is not available on this target by default",
4843            label = "this associated function is not available on this target by default",
4844            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4845            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4846        )
4847    )]
4848    pub trait HasFetchAdd {}
4849    cfg_no_atomic_cas_or_amo8! {
4850    #[doc(hidden)]
4851    #[cfg_attr(
4852        not(portable_atomic_no_diagnostic_namespace),
4853        diagnostic::on_unimplemented(
            message = "`add` requires atomic CAS, which is not available on this target by default",
4855            label = "this associated function is not available on this target by default",
4856            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4857            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4858        )
4859    )]
4860    pub trait HasAdd {}
4861    } // cfg_no_atomic_cas_or_amo8!
4862    #[doc(hidden)]
4863    #[cfg_attr(
4864        not(portable_atomic_no_diagnostic_namespace),
4865        diagnostic::on_unimplemented(
            message = "`fetch_sub` requires atomic CAS, which is not available on this target by default",
4867            label = "this associated function is not available on this target by default",
4868            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4869            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4870        )
4871    )]
4872    pub trait HasFetchSub {}
4873    cfg_no_atomic_cas_or_amo8! {
4874    #[doc(hidden)]
4875    #[cfg_attr(
4876        not(portable_atomic_no_diagnostic_namespace),
4877        diagnostic::on_unimplemented(
            message = "`sub` requires atomic CAS, which is not available on this target by default",
4879            label = "this associated function is not available on this target by default",
4880            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4881            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4882        )
4883    )]
4884    pub trait HasSub {}
4885    } // cfg_no_atomic_cas_or_amo8!
4886    cfg_no_atomic_cas_or_amo32! {
4887    #[doc(hidden)]
4888    #[cfg_attr(
4889        not(portable_atomic_no_diagnostic_namespace),
4890        diagnostic::on_unimplemented(
            message = "`fetch_ptr_add` requires atomic CAS, which is not available on this target by default",
4892            label = "this associated function is not available on this target by default",
4893            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4894            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4895        )
4896    )]
4897    pub trait HasFetchPtrAdd {}
4898    #[doc(hidden)]
4899    #[cfg_attr(
4900        not(portable_atomic_no_diagnostic_namespace),
4901        diagnostic::on_unimplemented(
            message = "`fetch_ptr_sub` requires atomic CAS, which is not available on this target by default",
4903            label = "this associated function is not available on this target by default",
4904            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4905            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4906        )
4907    )]
4908    pub trait HasFetchPtrSub {}
4909    #[doc(hidden)]
4910    #[cfg_attr(
4911        not(portable_atomic_no_diagnostic_namespace),
4912        diagnostic::on_unimplemented(
            message = "`fetch_byte_add` requires atomic CAS, which is not available on this target by default",
4914            label = "this associated function is not available on this target by default",
4915            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4916            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4917        )
4918    )]
4919    pub trait HasFetchByteAdd {}
4920    #[doc(hidden)]
4921    #[cfg_attr(
4922        not(portable_atomic_no_diagnostic_namespace),
4923        diagnostic::on_unimplemented(
            message = "`fetch_byte_sub` requires atomic CAS, which is not available on this target by default",
4925            label = "this associated function is not available on this target by default",
4926            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4927            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4928        )
4929    )]
4930    pub trait HasFetchByteSub {}
4931    #[doc(hidden)]
4932    #[cfg_attr(
4933        not(portable_atomic_no_diagnostic_namespace),
4934        diagnostic::on_unimplemented(
            message = "`fetch_and` requires atomic CAS, which is not available on this target by default",
4936            label = "this associated function is not available on this target by default",
4937            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4938            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4939        )
4940    )]
4941    pub trait HasFetchAnd {}
4942    #[doc(hidden)]
4943    #[cfg_attr(
4944        not(portable_atomic_no_diagnostic_namespace),
4945        diagnostic::on_unimplemented(
            message = "`and` requires atomic CAS, which is not available on this target by default",
4947            label = "this associated function is not available on this target by default",
4948            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4949            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4950        )
4951    )]
4952    pub trait HasAnd {}
4953    } // cfg_no_atomic_cas_or_amo32!
4954    #[doc(hidden)]
4955    #[cfg_attr(
4956        not(portable_atomic_no_diagnostic_namespace),
4957        diagnostic::on_unimplemented(
            message = "`fetch_nand` requires atomic CAS, which is not available on this target by default",
4959            label = "this associated function is not available on this target by default",
4960            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4961            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4962        )
4963    )]
4964    pub trait HasFetchNand {}
4965    cfg_no_atomic_cas_or_amo32! {
4966    #[doc(hidden)]
4967    #[cfg_attr(
4968        not(portable_atomic_no_diagnostic_namespace),
4969        diagnostic::on_unimplemented(
            message = "`fetch_or` requires atomic CAS, which is not available on this target by default",
4971            label = "this associated function is not available on this target by default",
4972            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4973            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4974        )
4975    )]
4976    pub trait HasFetchOr {}
4977    #[doc(hidden)]
4978    #[cfg_attr(
4979        not(portable_atomic_no_diagnostic_namespace),
4980        diagnostic::on_unimplemented(
            message = "`or` requires atomic CAS, which is not available on this target by default",
4982            label = "this associated function is not available on this target by default",
4983            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4984            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4985        )
4986    )]
4987    pub trait HasOr {}
4988    #[doc(hidden)]
4989    #[cfg_attr(
4990        not(portable_atomic_no_diagnostic_namespace),
4991        diagnostic::on_unimplemented(
            message = "`fetch_xor` requires atomic CAS, which is not available on this target by default",
4993            label = "this associated function is not available on this target by default",
4994            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4995            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4996        )
4997    )]
4998    pub trait HasFetchXor {}
4999    #[doc(hidden)]
5000    #[cfg_attr(
5001        not(portable_atomic_no_diagnostic_namespace),
5002        diagnostic::on_unimplemented(
            message = "`xor` requires atomic CAS, which is not available on this target by default",
5004            label = "this associated function is not available on this target by default",
5005            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5006            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5007        )
5008    )]
5009    pub trait HasXor {}
5010    #[doc(hidden)]
5011    #[cfg_attr(
5012        not(portable_atomic_no_diagnostic_namespace),
5013        diagnostic::on_unimplemented(
            message = "`fetch_not` requires atomic CAS, which is not available on this target by default",
5015            label = "this associated function is not available on this target by default",
5016            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5017            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5018        )
5019    )]
5020    pub trait HasFetchNot {}
5021    #[doc(hidden)]
5022    #[cfg_attr(
5023        not(portable_atomic_no_diagnostic_namespace),
5024        diagnostic::on_unimplemented(
            message = "`not` requires atomic CAS, which is not available on this target by default",
5026            label = "this associated function is not available on this target by default",
5027            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5028            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5029        )
5030    )]
5031    pub trait HasNot {}
5032    } // cfg_no_atomic_cas_or_amo32!
5033    #[doc(hidden)]
5034    #[cfg_attr(
5035        not(portable_atomic_no_diagnostic_namespace),
5036        diagnostic::on_unimplemented(
            message = "`fetch_neg` requires atomic CAS, which is not available on this target by default",
5038            label = "this associated function is not available on this target by default",
5039            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5040            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5041        )
5042    )]
5043    pub trait HasFetchNeg {}
5044    #[doc(hidden)]
5045    #[cfg_attr(
5046        not(portable_atomic_no_diagnostic_namespace),
5047        diagnostic::on_unimplemented(
            message = "`neg` requires atomic CAS, which is not available on this target by default",
5049            label = "this associated function is not available on this target by default",
5050            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5051            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5052        )
5053    )]
5054    pub trait HasNeg {}
5055    cfg_no_atomic_cas_or_amo32! {
5056    #[cfg(feature = "float")]
5057    #[cfg_attr(target_pointer_width = "16", allow(dead_code, unreachable_pub))]
5058    #[doc(hidden)]
5059    #[cfg_attr(
5060        not(portable_atomic_no_diagnostic_namespace),
5061        diagnostic::on_unimplemented(
            message = "`fetch_abs` requires atomic CAS, which is not available on this target by default",
5063            label = "this associated function is not available on this target by default",
5064            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5065            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5066        )
5067    )]
5068    pub trait HasFetchAbs {}
5069    } // cfg_no_atomic_cas_or_amo32!
5070    #[doc(hidden)]
5071    #[cfg_attr(
5072        not(portable_atomic_no_diagnostic_namespace),
5073        diagnostic::on_unimplemented(
            message = "`fetch_min` requires atomic CAS, which is not available on this target by default",
5075            label = "this associated function is not available on this target by default",
5076            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5077            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5078        )
5079    )]
5080    pub trait HasFetchMin {}
5081    #[doc(hidden)]
5082    #[cfg_attr(
5083        not(portable_atomic_no_diagnostic_namespace),
5084        diagnostic::on_unimplemented(
            message = "`fetch_max` requires atomic CAS, which is not available on this target by default",
5086            label = "this associated function is not available on this target by default",
5087            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5088            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5089        )
5090    )]
5091    pub trait HasFetchMax {}
5092    #[doc(hidden)]
5093    #[cfg_attr(
5094        not(portable_atomic_no_diagnostic_namespace),
5095        diagnostic::on_unimplemented(
            message = "`fetch_update` requires atomic CAS, which is not available on this target by default",
5097            label = "this associated function is not available on this target by default",
5098            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5099            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5100        )
5101    )]
5102    pub trait HasFetchUpdate {}
5103    cfg_no_atomic_cas_or_amo32! {
5104    #[doc(hidden)]
5105    #[cfg_attr(
5106        not(portable_atomic_no_diagnostic_namespace),
5107        diagnostic::on_unimplemented(
            message = "`bit_set` requires atomic CAS, which is not available on this target by default",
5109            label = "this associated function is not available on this target by default",
5110            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5111            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5112        )
5113    )]
5114    pub trait HasBitSet {}
5115    #[doc(hidden)]
5116    #[cfg_attr(
5117        not(portable_atomic_no_diagnostic_namespace),
5118        diagnostic::on_unimplemented(
            message = "`bit_clear` requires atomic CAS, which is not available on this target by default",
5120            label = "this associated function is not available on this target by default",
5121            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5122            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5123        )
5124    )]
5125    pub trait HasBitClear {}
5126    #[doc(hidden)]
5127    #[cfg_attr(
5128        not(portable_atomic_no_diagnostic_namespace),
5129        diagnostic::on_unimplemented(
            message = "`bit_toggle` requires atomic CAS, which is not available on this target by default",
5131            label = "this associated function is not available on this target by default",
5132            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5133            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5134        )
5135    )]
5136    pub trait HasBitToggle {}
5137    } // cfg_no_atomic_cas_or_amo32!
5138}
5139} // cfg_no_atomic_cas!