
Rust Development Guide

1. Environment setup

  • Install the Rust toolchain with rustup.
  • Keep the stable toolchain as your default unless you have a clear nightly-only need.
  • Install common components:
    • rustfmt for formatting
    • clippy for lints
    • rust-docs for offline docs
  • Useful commands:
rustup show
rustup component add rustfmt clippy rust-docs
cargo --version
rustc --version
cargo fmt
cargo clippy --all-targets --all-features
cargo test
cargo doc --open

2. Cargo workflow

  • cargo new app-name: create a binary application.
  • cargo new library-name --lib: create a library crate.
  • cargo init: initialize the current directory.
  • cargo run --example name: run an example program from examples/.
  • cargo check: fast compile feedback without building final artifacts.
  • cargo build --release: optimized build for production or benchmarking.

3. Recommended inner loop

  1. Write or adjust a test.
  2. Implement the smallest useful change.
  3. Run cargo fmt.
  4. Run cargo clippy --all-targets --all-features.
  5. Run cargo test.
  6. Refactor only after the behavior is correct.
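The loop above can be sketched with a tiny test-first example. The function name parse_port and its error strings are illustrative, not from any real crate:

```rust
/// Parse a TCP port, rejecting 0 because it is not a bindable port.
fn parse_port(input: &str) -> Result<u16, String> {
    match input.trim().parse::<u16>() {
        Ok(0) => Err(format!("invalid port '{input}': 0 is not allowed")),
        Ok(port) => Ok(port),
        Err(e) => Err(format!("invalid port '{input}': {e}")),
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    // Step 1: write the test first, then implement the smallest change
    // that makes it pass.
    #[test]
    fn rejects_zero_and_garbage() {
        assert!(parse_port("0").is_err());
        assert!(parse_port("http").is_err());
        assert_eq!(parse_port("8080"), Ok(8080));
    }
}

fn main() {
    println!("{:?}", parse_port("8080"));
}
```

After the test passes, run cargo fmt, cargo clippy, and cargo test before refactoring.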

4. Project structure

Use a simple structure first:

src/
  lib.rs or main.rs
  domain.rs
  parsing.rs
  storage.rs
  errors.rs
tests/
examples/
benches/

Recommended separation:

  • Domain logic: types, invariants, business rules.
  • Adapters: file system, network, database, terminal, environment.
  • Parsing and serialization: input/output transformations.
  • Errors: domain-specific errors plus context conversion.
  • Runtime glue: main, async runtime setup, CLI wiring, dependency injection.
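The domain/adapter split above can be sketched with a trait boundary. All names here (Store, MemoryStore) are illustrative, not from any real crate:

```rust
use std::collections::HashMap;

/// Domain logic depends on an abstract store, not a concrete database.
trait Store {
    fn save(&mut self, key: &str, value: &str);
    fn load(&self, key: &str) -> Option<String>;
}

/// An in-memory adapter; a real one might wrap a file or a database.
struct MemoryStore(HashMap<String, String>);

impl Store for MemoryStore {
    fn save(&mut self, key: &str, value: &str) {
        self.0.insert(key.to_string(), value.to_string());
    }
    fn load(&self, key: &str) -> Option<String> {
        self.0.get(key).cloned()
    }
}

/// Runtime glue: main chooses the concrete adapter and wires it in.
fn main() {
    let mut store = MemoryStore(HashMap::new());
    store.save("greeting", "hello");
    println!("{:?}", store.load("greeting"));
}
```

Tests can use the in-memory adapter while production wiring swaps in the real one.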

5. API design principles

  • Prefer small functions with clear ownership.
  • Return borrowed data only when the lifetime model stays obvious.
  • Start concrete, then generalize with traits or generics.
  • Model illegal states out of existence using enums and newtypes.
  • Use Option for absence and Result for failure.
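A minimal sketch of the enum/newtype advice above; Connection and Email are invented names used only for illustration:

```rust
/// A connection is either disconnected or connected, never a struct
/// with nullable fields; the compiler forces callers to handle both.
#[derive(Debug)]
enum Connection {
    Disconnected,
    Connected { session_id: u64 },
}

/// A newtype: a validated email cannot be confused with a plain String.
#[derive(Debug, PartialEq)]
struct Email(String);

impl Email {
    /// Option models absence of a valid value, per the last bullet.
    fn parse(raw: &str) -> Option<Email> {
        raw.contains('@').then(|| Email(raw.to_string()))
    }
}

fn main() {
    let conn = Connection::Connected { session_id: 42 };
    match conn {
        Connection::Disconnected => println!("not connected"),
        Connection::Connected { session_id } => println!("session {session_id}"),
    }
    println!("{:?}", Email::parse("a@example.com"));
}
```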

6. Error handling strategy

  • Library code should preserve useful error information.
  • Application code should add context near I/O boundaries.
  • Avoid unwrap in production paths unless failure is impossible by construction.
  • Prefer error messages that explain the failed action, not just the raw value.
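One stdlib-only way to sketch the library/application split described above; the config path and function names are illustrative:

```rust
use std::fs;
use std::io;

/// Library-style code: preserve the original error untouched.
fn read_config(path: &str) -> io::Result<String> {
    fs::read_to_string(path)
}

/// Application-style code at the I/O boundary: name the failed action,
/// not just the raw error value.
fn load_config(path: &str) -> Result<String, String> {
    read_config(path).map_err(|e| format!("failed to read config file '{path}': {e}"))
}

fn main() {
    match load_config("/nonexistent/app.toml") {
        Ok(text) => println!("{} bytes of config", text.len()),
        Err(msg) => eprintln!("{msg}"),
    }
}
```

Crates like anyhow and thiserror automate this pattern, but the principle is the same.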

7. Testing strategy

  • Unit tests for business rules and small invariants.
  • Integration tests for external behavior and public API.
  • Doc tests for examples and usage snippets.
  • Property-style thinking: test categories, not just hand-picked cases.

Useful commands:

cargo test
cargo test module_name
cargo test -- --nocapture
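The unit-test and property-style points above can be sketched like this; clamp_percent is an invented example function:

```rust
fn clamp_percent(value: i64) -> i64 {
    value.clamp(0, 100)
}

#[cfg(test)]
mod tests {
    use super::*;

    // Property-style thinking: each test covers a whole input
    // category, not a single hand-picked value.
    #[test]
    fn below_range_clamps_to_zero() {
        for v in [-1, -100, i64::MIN] {
            assert_eq!(clamp_percent(v), 0);
        }
    }

    #[test]
    fn in_range_values_pass_through() {
        for v in [0, 50, 100] {
            assert_eq!(clamp_percent(v), v);
        }
    }

    #[test]
    fn above_range_clamps_to_hundred() {
        for v in [101, i64::MAX] {
            assert_eq!(clamp_percent(v), 100);
        }
    }
}

fn main() {
    println!("{}", clamp_percent(150)); // prints 100
}
```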

8. Documentation strategy

  • Put crate-level docs in lib.rs.
  • Document public types and important invariants.
  • Add examples to public APIs that are likely to be copied into real code.
  • Use cargo doc regularly; unclear docs often reveal unclear design.
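A sketch of the comment conventions: crate-level docs use //!, item docs use ///, and a fenced snippet under an # Examples heading becomes a doc test that cargo test runs. The function here is illustrative:

```rust
//! Crate-level docs go at the top of lib.rs with `//!` comments.

/// Converts a temperature from Celsius to Fahrenheit.
///
/// Document the important invariant here: the input is a plain f64,
/// so NaN passes through unchanged. A fenced code example under an
/// `# Examples` heading is compiled and run by `cargo test`.
pub fn to_fahrenheit(celsius: f64) -> f64 {
    celsius * 9.0 / 5.0 + 32.0
}

fn main() {
    println!("{}", to_fahrenheit(100.0)); // 212
}
```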

9. Performance strategy

  • Measure before optimizing.
  • Use cargo check for fast feedback and cargo build --release for real performance testing.
  • Prefer algorithm and allocation improvements before micro-optimizations.
  • Avoid cloning unless ownership or performance tradeoffs are understood.
  • Reach for iterators, slices, and borrowed data before building intermediate containers.
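The last two bullets can be sketched with two equivalent computations; the function names are invented, and as always, measure before optimizing:

```rust
/// Builds a temporary Vec just to sum it afterwards.
fn sum_of_even_squares_allocating(values: &[i64]) -> i64 {
    let squares: Vec<i64> = values
        .iter()
        .copied()
        .filter(|v| v % 2 == 0)
        .map(|v| v * v)
        .collect();
    squares.iter().sum()
}

/// Same result, no intermediate allocation: the iterator chain
/// feeds `sum` directly.
fn sum_of_even_squares(values: &[i64]) -> i64 {
    values
        .iter()
        .copied()
        .filter(|v| v % 2 == 0)
        .map(|v| v * v)
        .sum()
}

fn main() {
    let data = [1, 2, 3, 4];
    println!("{}", sum_of_even_squares(&data)); // 4 + 16 = 20
}
```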

10. Concurrency and async guidance

  • Use threads when work is CPU-bound or parallel.
  • Use async when work is mostly waiting on I/O.
  • Minimize shared mutable state; prefer ownership transfer or message passing.
  • Keep sync and async boundaries explicit.
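The message-passing advice can be sketched with std::sync::mpsc; parallel_sum is an invented helper that splits a CPU-bound sum across threads:

```rust
use std::sync::mpsc;
use std::thread;

/// Sum the range 0..upper across `workers` threads. Each thread owns
/// its chunk and sends the partial result over a channel: no shared
/// mutable state, no locks.
fn parallel_sum(workers: u64, upper: u64) -> u64 {
    let (tx, rx) = mpsc::channel();
    let chunk = upper / workers;

    let handles: Vec<_> = (0..workers)
        .map(|i| {
            let tx = tx.clone();
            // `move` transfers ownership of the chunk bounds into
            // the thread.
            thread::spawn(move || {
                let hi = if i == workers - 1 { upper } else { (i + 1) * chunk };
                let partial: u64 = (i * chunk..hi).sum();
                tx.send(partial).expect("receiver alive");
            })
        })
        .collect();

    drop(tx); // close the channel so the receiver loop terminates
    let total = rx.iter().sum();
    for h in handles {
        h.join().expect("worker panicked");
    }
    total
}

fn main() {
    println!("{}", parallel_sum(4, 1000));
}
```

For mostly-waiting I/O workloads, an async runtime with channels follows the same shape.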

11. Unsafe code rules

  • Unsafe should be rare, tiny, documented, and wrapped in a safe API.
  • Write down the invariants before writing the unsafe block.
  • Test the safe wrapper, not just the unsafe lines.
  • If you cannot clearly explain the safety argument, stop and simplify.
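A classic illustration of the safe-wrapper rule is reimplementing slice splitting, adapted from the well-known split_at_mut example (the standard library already provides this; it is shown here only to demonstrate the pattern):

```rust
/// Split a mutable slice into two disjoint mutable halves.
/// The unsafe block is tiny, and its invariant is written down.
fn split_at_mut(slice: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
    let len = slice.len();
    let ptr = slice.as_mut_ptr();

    assert!(mid <= len);
    // Safety argument: the halves are disjoint because the first ends
    // at `mid` and the second starts at `mid`; the assert above keeps
    // both in bounds. Callers never see raw pointers.
    unsafe {
        (
            std::slice::from_raw_parts_mut(ptr, mid),
            std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}

fn main() {
    let mut data = [1, 2, 3, 4, 5];
    let (left, right) = split_at_mut(&mut data, 2);
    left[0] = 10;
    right[0] = 30;
    println!("{data:?}"); // [10, 2, 30, 4, 5]
}
```

Tests exercise the safe wrapper: callers should never need to repeat the safety argument.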

12. Production checklist

  • Formatting and lints pass.
  • Tests are meaningful and stable.
  • Errors have context.
  • Public APIs are documented.
  • Panics are intentional.
  • Logging and metrics are placed at operational boundaries.
  • Configuration is validated early.
  • Release builds and target platforms are verified.