diff --git a/README.md b/README.md index 23c227c..14e8646 100644 --- a/README.md +++ b/README.md @@ -1,8 +1,14 @@ -# CloudWeGo-Monolake +# Project Archival Notice +This project is no longer recommended for use and has been officially archived. The decision to archive is driven by the following key factors: +- Insufficient engineering resource investment: We currently lack the dedicated engineering manpower required to support ongoing development, bug fixes, and feature enhancements. +- Challenges in sustaining open-source maintenance and growth: Due to the above constraints, it has become impractical to continue maintaining the project at a standard that meets open-source community expectations or to drive its long-term development. + +We sincerely appreciate the interest and any contributions this project has received from the community. For users seeking similar functionality, we recommend exploring alternative open-source solutions that are actively maintained and aligned with current technical standards. 
+ +## CloudWeGo-Monolake English | [简体中文](README_zh.md) -[![WebSite](https://img.shields.io/website?up_message=cloudwego&url=https%3A%2F%2Fwww.cloudwego.io%2F)](https://www.cloudwego.io/) [![License](https://img.shields.io/github/license/cloudwego/monolake)](https://github.com/cloudwego/monolake/blob/main/LICENSE) [![OpenIssue](https://img.shields.io/github/issues/cloudwego/monolake)](https://github.com/cloudwego/monolake/issues) [![ClosedIssue](https://img.shields.io/github/issues-closed/cloudwego/monolake)](https://github.com/cloudwego/monolake/issues?q=is%3Aissue+is%3Aclosed) @@ -64,13 +70,13 @@ The Monolake framework has been used to build various high-performance proxies a ## Documentation -- [**Getting Started**](https://www.cloudwego.io/docs/monolake/getting-started/) +- [**Getting Started**](./docs/getting-started/) -- [**Architecture**](https://www.cloudwego.io/docs/monolake/architecture/) +- [**Architecture**](./docs/architecture/) -- [**Developer guide**](https://www.cloudwego.io/docs/monolake/tutorial/) +- [**Developer guide**](./docs/tutorial/) -- [**Config guide**](https://www.cloudwego.io/docs/monolake/config-guide/) +- [**Config guide**](./docs/config-guide/) ## Related Crates diff --git a/README_zh.md b/README_zh.md index a8d11ae..fd7cbbb 100644 --- a/README_zh.md +++ b/README_zh.md @@ -2,7 +2,6 @@ 简体中文 | [English](README.md) -[![网站状态](https://img.shields.io/website?up_message=cloudwego&url=https%3A%2F%2Fwww.cloudwego.io%2F)](https://www.cloudwego.io/) [![许可证](https://img.shields.io/github/license/cloudwego/monolake)](https://github.com/cloudwego/monolake/blob/main/LICENSE) [![开放议题](https://img.shields.io/github/issues/cloudwego/monolake)](https://github.com/cloudwego/monolake/issues) [![已关闭议题](https://img.shields.io/github/issues-closed/cloudwego/monolake)](https://github.com/cloudwego/monolake/issues?q=is%3Aissue+is%3Aclosed) @@ -64,10 +63,10 @@ Monolake 框架已用于构建多种高性能代理和网关,**并积极部署 ## 文档 -- 
[**快速开始**](https://www.cloudwego.io/zh/docs/monolake/getting-started/) -- [**架构设计**](https://www.cloudwego.io/zh/docs/monolake/architecture/) -- [**开发指南**](https://www.cloudwego.io/zh/docs/monolake/tutorial/) -- [**配置指南**](https://www.cloudwego.io/zh/docs/monolake/config-guid/) +- [**快速开始**](./docs/getting-started/) +- [**架构设计**](./docs/architecture/) +- [**开发指南**](./docs/tutorial/) +- [**配置指南**](./docs/config-guide/) ## 相关组件 diff --git a/docs/architecture/_index.md b/docs/architecture/_index.md new file mode 100644 index 0000000..faad831 --- /dev/null +++ b/docs/architecture/_index.md @@ -0,0 +1,9 @@ +--- +title: "Architecture" +linkTitle: "Architecture" +weight: 3 +keywords: ["Proxy", "Rust", "io_uring", "Architecture"] +description: "Architecture and design deep dive" +--- + + diff --git a/docs/architecture/connector.md b/docs/architecture/connector.md new file mode 100644 index 0000000..fbe72ee --- /dev/null +++ b/docs/architecture/connector.md @@ -0,0 +1,55 @@ +--- +title: "Connectors" +linkTitle: "Connectors" +weight: 3 +description: "Deep dive into monoio-transport's modular connector architecture, connection composition patterns, and layered network protocols" +--- + +# Connector Trait + +The core of the [monoio-transports](https://docs.rs/monoio-transports/latest/monoio_transports/) crate is its modular and composable connector architecture, which allows developers to easily build complex, high-performance network communication solutions. + +At the heart of this design is the [Connector](https://docs.rs/monoio-transports/latest/monoio_transports/connectors/trait.Connector.html) trait, which defines a common interface for establishing network connections: +
+```rust
+pub trait Connector<K> {
+    type Connection;
+    type Error;
+    fn connect(&self, key: K) -> impl Future<Output = Result<Self::Connection, Self::Error>>;
+}
+```
+ +## Stacking Connectors + +Connectors can be easily composed and stacked to create complex connection setups. 
For example, let's say you want to create an HTTPS connector that supports both the HTTP/1.1 and HTTP/2 protocols: +
+```rust
+use monoio_transports::{
+    connectors::{TcpConnector, TlsConnector},
+    HttpConnector,
+};
+
+// Create a TCP connector
+let tcp_connector = TcpConnector::default();
+
+// Create a TLS connector on top of the TCP connector, with custom ALPN protocols
+let tls_connector = TlsConnector::new_with_tls_default(tcp_connector, Some(vec!["http/1.1", "h2"]));
+
+// Create an HTTP connector on top of the TLS connector, supporting both HTTP/1.1 and HTTP/2
+let https_connector: HttpConnector<TlsConnector<TcpConnector>, _, _> = HttpConnector::default();
+```
+ +In this example, we start with a basic [TcpConnector](https://docs.rs/monoio-transports/latest/monoio_transports/connectors/struct.TcpConnector.html), add a [TlsConnector](https://docs.rs/monoio-transports/latest/monoio_transports/connectors/struct.TlsConnector.html) on top of it to provide TLS encryption, and then wrap the whole stack with an HttpConnector to handle both HTTP/1.1 and HTTP/2 protocols. This modular approach allows you to easily customize the connector stack to suit your specific needs. + +# Connector Types + +The [monoio-transports](https://docs.rs/monoio-transports/latest/monoio_transports/) crate provides several pre-built connector types that you can use as building blocks for your own solutions. 
Here's a table outlining the available connectors: + +| Connector Type | Description | +|---------------|-------------| +| [TcpConnector](https://docs.rs/monoio-transports/latest/monoio_transports/connectors/struct.TcpConnector.html) | Establishes TCP connections | +| [UnixConnector](https://docs.rs/monoio-transports/latest/monoio_transports/connectors/struct.UnixConnector.html) | Establishes Unix domain socket connections | +| [TlsConnector](https://docs.rs/monoio-transports/latest/monoio_transports/connectors/struct.TlsConnector.html) | Adds TLS encryption to an underlying L4 connector, supporting both native-tls and rustls backends | +| [HttpConnector](https://docs.rs/monoio-transports/latest/monoio_transports/http/struct.HttpConnector.html) | Handles HTTP protocol negotiation and connection setup | +| [PooledConnector](https://docs.rs/monoio-transports/latest/monoio_transports/pool/struct.PooledConnector.html) | Provides connection pooling capabilities for any underlying connector | + diff --git a/docs/architecture/context.md b/docs/architecture/context.md new file mode 100644 index 0000000..16306fb --- /dev/null +++ b/docs/architecture/context.md @@ -0,0 +1,74 @@ +--- +title: "Context Management" +linkTitle: "Context" +weight: 4 +--- + +# `certain_map` + +In a service-oriented architecture, managing the context data that flows between different services is a critical aspect of the system design. The [`certain_map`](https://docs.rs/certain-map/latest/certain_map/) crate provides a powerful way to define and work with typed context data, ensuring the existence of required information at compile-time. + +## The Problem `certain_map` Solves + +When building modular services, it's common to have indirect data dependencies between components. For example, a downstream service may require information that was originally provided in an upstream request, but the intermediate services don't directly use that data. 
Traditionally, this would involve passing all potentially relevant data through the request/response types, which can quickly become unwieldy and error-prone. + +Alternatively, you might use a `HashMap` to manage the context data, but this approach has a significant drawback: you cannot ensure at compile-time that the required key-value pairs have been set when the data is read. This can lead to unnecessary error handling branches or even panics in your program. + +## How `certain_map` Helps + +The `certain_map` crate solves this problem by providing a typed-map-like struct that ensures the existence of specific items at compile-time. When you define a `Context` struct using `certain_map`, the compiler will enforce that certain fields are present, preventing runtime errors and simplifying the implementation of your services. + +Here's an example of how you might set up the context for your project: +
+```rust
+certain_map::certain_map! {
+    #[derive(Debug, Clone)]
+    #[empty(EmptyContext)]
+    #[full(FullContext)]
+    pub struct Context {
+        peer_addr: PeerAddr,
+        remote_addr: Option<RemoteAddr>,
+    }
+}
+```
+ +In this example, the `Context` struct has two fields: `peer_addr` of type `PeerAddr`, and `remote_addr` of type `Option<RemoteAddr>`. The `#[empty(EmptyContext)]` and `#[full(FullContext)]` attributes define the type aliases for the empty and full versions of the context, respectively. + +The key benefits of using `certain_map` for your context management are: + +1. **Compile-time Guarantees**: The compiler will ensure that the necessary fields are present in the `Context` struct, preventing runtime errors and simplifying the implementation of your services. + +2. **Modularity and Composability**: By defining a clear context structure, you can more easily compose services together, as each service can specify the context data it requires using trait bounds. + +3. **Flexibility**: The `certain_map` crate provides a set of traits (`ParamSet`, `ParamRef`, `ParamTake`, etc.) 
that allow you to easily manipulate the context data, such as adding, removing, or modifying fields. + +4. **Reduced Boilerplate**: Instead of manually creating and managing structs to hold the context data, the `certain_map` crate generates the necessary code for you, reducing the amount of boilerplate in your project. + +## Using `certain_map` in Your Services + +Once you've defined your `Context` struct, you can use it in your services to ensure that the required data is available. For example, consider the following `UpstreamHandler` service: +
+```rust
+impl<CX, B> Service<(Request<B>, CX)> for UpstreamHandler
+where
+    CX: ParamRef<PeerAddr> + ParamMaybeRef<Option<RemoteAddr>>,
+    B: Body,
+    HttpError: From<B::Error>,
+{
+    type Response = ResponseWithContinue<HttpBody>;
+    type Error = Infallible;
+
+    async fn call(&self, (mut req, ctx): (Request<B>, CX)) -> Result<Self::Response, Self::Error> {
+        add_xff_header(req.headers_mut(), &ctx);
+        #[cfg(feature = "tls")]
+        if req.uri().scheme() == Some(&http::uri::Scheme::HTTPS) {
+            return self.send_https_request(req).await;
+        }
+        self.send_http_request(req).await
+    }
+}
+```
+ +In this example, the `UpstreamHandler` service expects the `Context` to contain the `PeerAddr` and optionally the `RemoteAddr`. The trait bounds `ParamRef<PeerAddr>` and `ParamMaybeRef<Option<RemoteAddr>>` ensure that these fields are available at compile-time, preventing potential runtime errors. + +By using `certain_map` to manage your context data, you can improve the modularity, maintainability, and robustness of your service-oriented architecture. 
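The compile-time guarantee is easiest to see in a hand-rolled reduction of what the macro generates. The sketch below uses illustrative names only (it is not the actual `certain_map` expansion): the empty and full states are two distinct types, so reading `peer_addr` from a context that never set it is a type error rather than a runtime panic.

```rust
use std::net::SocketAddr;

// Hand-rolled stand-ins for what `certain_map` would generate: the
// "field missing" state is a separate type, not a runtime condition.
pub struct EmptyContext;

pub struct FullContext {
    pub peer_addr: SocketAddr,
    pub remote_addr: Option<SocketAddr>,
}

impl EmptyContext {
    // Setting the field is a type-level transition that consumes the
    // empty state, in the spirit of `ParamSet`.
    pub fn with_peer_addr(self, peer_addr: SocketAddr) -> FullContext {
        FullContext { peer_addr, remote_addr: None }
    }
}

impl FullContext {
    // No Option, no unwrap: the type itself proves the field exists,
    // in the spirit of `ParamRef`.
    pub fn peer_addr(&self) -> SocketAddr {
        self.peer_addr
    }
}
```

Calling `peer_addr()` on `EmptyContext` simply does not compile, which is the same guarantee the generated empty/full context aliases provide.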
\ No newline at end of file diff --git a/docs/architecture/monolake_factory_stack.png b/docs/architecture/monolake_factory_stack.png new file mode 100644 index 0000000..bf145ce Binary files /dev/null and b/docs/architecture/monolake_factory_stack.png differ diff --git a/docs/architecture/monolake_service.jpeg b/docs/architecture/monolake_service.jpeg new file mode 100644 index 0000000..2e9facf Binary files /dev/null and b/docs/architecture/monolake_service.jpeg differ diff --git a/docs/architecture/runtime.md b/docs/architecture/runtime.md new file mode 100644 index 0000000..f0080cd --- /dev/null +++ b/docs/architecture/runtime.md @@ -0,0 +1,115 @@ +--- +title: "Runtime" +linkTitle: "Runtime" +weight: 1 +description: "Deep dive into Monolake's io_uring-based runtime and performance characteristics compared to traditional event-based runtimes" +--- + +## Runtime + +In asynchronous Rust programs, a runtime serves as the backbone for executing asynchronous tasks. It manages the scheduling, execution, and polling of these tasks while handling I/O operations efficiently. A well-designed runtime is crucial for achieving optimal performance, particularly in I/O-bound workloads. + +Monoio is a new, pure io-uring-based Rust asynchronous runtime that has been specifically designed to maximize efficiency and performance for I/O-bound tasks. By leveraging the advanced capabilities of io-uring directly, Monoio stands apart from other runtimes like Tokio-uring, which may operate on top of additional runtime layers. This direct integration with io-uring allows Monoio to take full advantage of the kernel's asynchronous I/O capabilities, resulting in improved performance metrics and reduced latency. + +## Thread-Per-Core Model + +One of the defining characteristics of Monoio is its thread-per-core architecture. Each core of the CPU runs a dedicated thread, allowing the runtime to avoid the complexities associated with shared data across multiple threads. 
This design choice means that users do not need to worry about whether their tasks implement `Send` or `Sync`, as data does not escape the thread at await points. This significantly simplifies concurrent programming. In contrast, Tokio utilizes a multi-threaded work-stealing scheduler. In this model, tasks can be migrated between threads, introducing complexities related to synchronization and data sharing. For example, a task scheduled in Tokio might be executed on any available thread, leading to potential context switching overhead. + +## Event Notification vs. Completion-Based Runtimes + +When working with asynchronous I/O in Rust, understanding the underlying mechanisms of different runtimes is crucial. Two prominent approaches are the io-uring-based runtimes (Monoio, on which Monolake is built) and the traditional event-notification-based runtimes (Tokio, async-std), which use mechanisms like kqueue and epoll. The fundamental difference between these two models lies in how they manage resource ownership and I/O operations. + +io_uring operates on a submission-based model, where the ownership of resources (such as buffers) is transferred to the kernel upon submission of an I/O request. This model allows for high performance and reduced context switching, as the kernel can process the requests asynchronously. When an I/O operation is completed, the ownership of the buffers is returned to the caller. This ownership transfer leads to several implications: + +1. **Ownership Semantics**: In io-uring, since the kernel takes ownership of the buffers during the operation, it allows for more efficient memory management. The caller does not need to manage the lifecycle of the buffers while the operation is in progress. + +2. **Concurrency Model**: The submission-based model allows for a more straightforward handling of concurrency, as multiple I/O operations can be submitted without waiting for each to complete. This can lead to improved throughput, especially in I/O-bound applications. 
+ In contrast, Tokio employs systems like kqueue and epoll. In this model, the application maintains ownership of the buffers throughout the lifetime of the I/O operation. Instead of transferring ownership, Tokio merely borrows the buffers, which has several implications: + +1. **Buffer Management**: Since Tokio borrows buffers, the application is responsible for managing their lifecycle. This can introduce complexity, especially when dealing with concurrent I/O operations, as developers must ensure that buffers are not inadvertently reused while still in use. + +2. **Polling Mechanism**: The polling model in Tokio requires the application to actively wait for events, which can result in increased context switches and potentially less efficient use of system resources compared to the submission-based model of io-uring. + +## Async I/O Trait Divergence + +Due to these fundamental differences in how I/O operations are managed, the async I/O traits for Tokio and Monoio diverge significantly. Tokio’s APIs are built around the concepts of futures and asynchronous borrowing, while the io-uring APIs in Monoio follow a submission and completion model that emphasizes ownership transfer. In Tokio’s read/write traits, buffers are borrowed or mutably borrowed. In contrast, Monoio’s async traits involve transferring ownership of the buffers, which are returned to the caller upon completion of the operation. +To achieve this high level of efficiency, Monoio utilizes certain unstable Rust features and introduces a new I/O abstraction that is not compatible with Tokio's async I/O traits, which are the de facto standard in Rust. This new abstraction is represented through the AsyncReadRent and AsyncWriteRent traits: 
+<details>
+<summary>Native traits</summary>
+{{< highlight rust >}}
+pub trait AsyncWriteRent {
+    // Required methods
+    fn write<T: IoBuf>(&mut self, buf: T) -> impl Future<Output = BufResult<usize, T>>;
+    fn writev<T: IoVecBuf>(&mut self, buf_vec: T) -> impl Future<Output = BufResult<usize, T>>;
+    fn flush(&mut self) -> impl Future<Output = io::Result<()>>;
+    fn shutdown(&mut self) -> impl Future<Output = io::Result<()>>;
+}
+
+pub trait AsyncReadRent {
+    // Required methods
+    fn read<T: IoBufMut>(&mut self, buf: T) -> impl Future<Output = BufResult<usize, T>>;
+    fn readv<T: IoVecBufMut>(&mut self, buf: T) -> impl Future<Output = BufResult<usize, T>>;
+}
+{{< /highlight >}}
+</details>
+<details>
+<summary>Tokio traits</summary>
+{{< highlight rust >}}
+pub trait AsyncRead {
+    // Required method
+    fn poll_read(
+        self: Pin<&mut Self>,
+        cx: &mut Context<'_>,
+        buf: &mut ReadBuf<'_>,
+    ) -> Poll<io::Result<()>>;
+}
+
+pub trait AsyncWrite {
+    // Required methods
+    fn poll_write(
+        self: Pin<&mut Self>,
+        cx: &mut Context<'_>,
+        buf: &[u8],
+    ) -> Poll<Result<usize, io::Error>>;
+    fn poll_flush(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), io::Error>>;
+    fn poll_shutdown(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), io::Error>>;
+}
+{{< /highlight >}}
+</details>
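The two contracts can be contrasted with plain functions, no runtime involved. The names below are illustrative; only the signatures matter: the borrow-style callee reads into a `&mut [u8]` the caller keeps, while the rent-style callee takes the buffer by value and hands it back with the result, mirroring the shape of monoio's `BufResult<usize, T>`.

```rust
// Borrow-style (epoll world): the caller retains ownership of the buffer
// for the whole operation.
fn read_borrowed(src: &[u8], buf: &mut [u8]) -> usize {
    let n = src.len().min(buf.len());
    buf[..n].copy_from_slice(&src[..n]);
    n
}

// Rent-style (io_uring world): ownership of the buffer moves into the
// callee (as it would move to the kernel) and is returned on completion.
fn read_rent(src: &[u8], mut buf: Vec<u8>) -> (usize, Vec<u8>) {
    let n = src.len().min(buf.capacity());
    buf.clear();
    buf.extend_from_slice(&src[..n]);
    (n, buf)
}
```

In the rent-style signature the caller provably cannot touch or free the buffer while the operation is in flight, which is exactly the property an io_uring submission needs.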
+ + \ No newline at end of file diff --git a/docs/architecture/service.md b/docs/architecture/service.md new file mode 100644 index 0000000..c4994b0 --- /dev/null +++ b/docs/architecture/service.md @@ -0,0 +1,314 @@ +--- +title: "Service" +linkTitle: "Service" +weight: 2 +description: "Overview of Monolake's service architecture, factory patterns, and how they enable modular and composable network services" +--- + +## Services + +![image](monolake_service.jpeg) + +The Service pattern is a fundamental abstraction in network programming, popularized by the Tower library in the Rust ecosystem. At its core, a Service represents an asynchronous function that processes requests and returns responses. This pattern is particularly powerful for building networking applications as it enables: + +- **Composability**: Services can be layered and combined +- **Middleware**: Common functionality like timeouts and rate limiting can be implemented as wrapper services +- **Protocol Agnosticism**: The pattern works across different protocols (HTTP, Thrift, etc.) +- **Testability**: Services can be easily mocked and tested in isolation + +## Improved Service Trait + +Tower's Service Trait +
+```rust
+pub trait Service<Request> {
+    type Response;
+    type Error;
+    type Future: Future<Output = Result<Self::Response, Self::Error>>;
+
+    // Required methods
+    fn poll_ready(
+        &mut self,
+        cx: &mut Context<'_>,
+    ) -> Poll<Result<(), Self::Error>>;
+    fn call(&mut self, req: Request) -> Self::Future;
+}
+```
+Monolake Service Trait +
+```rust
+pub trait Service<Request> {
+    /// Responses given by the service.
+    type Response;
+    /// Errors produced by the service.
+    type Error;
+    /// Process the request and return the response asynchronously.
+    fn call(&self, req: Request) -> impl Future<Output = Result<Self::Response, Self::Error>>;
+}
+```
+ +## New Service Trait + +Async Service trait +
+```rust
+impl<Req> tower::Service<Req> for SomeStruct
+where
+    // ... 
+{ + type Response = // ...; + type Error = // ...; + type Future = Pin + Send + 'static>>; + + fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { + self.inner.poll_ready(cx) + } + + fn call(&mut self, req: Req) -> Self::Future { + let client = self.client.clone(); + Box::pin(async move { + client.get(req).await; + // ... + }) + } +} +``` + +Trait implementation + +```rust +impl Service for DelayService +where + T: Service, +{ + type Response = T::Response; + type Error = T::Error; + + async fn call(&self, req: R) -> Result { + monoio::time::sleep(self.delay).await; + self.inner.call(req).await + } +} +``` + +Tower framework's [Service trait](https://docs.rs/tower/latest/tower/trait.Service.html), while powerful, presents some challenges: + +1. Limited Capture Scope: As a future factory used serially and spawned for parallel execution, Tower's Service futures cannot capture &self or &mut self. This necessitates cloning and moving ownership into the future. + + +2. Complex Poll-Style Implementation: Tower's Service trait is defined in a poll-style, requiring manual state management. This often leads to verbose implementations using Box> to leverage async/await syntax. + +Monolake's [service_async](https://docs.rs/service-async/0.2.4/service_async/index.html) crate leverages [impl Trait](https://doc.rust-lang.org/reference/types/impl-trait.html) to introduce a new [Service trait](https://docs.rs/service-async/0.2.4/service_async/trait.Service.html), designed to simplify implementation and improve performance: + +1. Efficient Borrowing: By using impl Trait in the return position, futures can now capture &self, eliminating unnecessary cloning. + + +2. Zero-Cost Abstractions: Utilizing impl Trait instead of Box allows for more inline code optimization, especially for operations not crossing await points. 
+ +## Service Factories & MakeService Trait + +In complex systems, creating and managing services often requires more flexibility than a simple constructor can provide. This is where the concept of Service factories comes into play. A Service factory is responsible for creating instances of services, potentially with complex initialization logic or state management. + +The [`MakeService`](https://docs.rs/service-async/0.2.4/service_async/trait.MakeService.html) trait is the cornerstone of our Service factory system. It provides a flexible way to construct service chains while allowing state migration from previous instances. This is particularly useful when services manage stateful resources like connection pools, and you need to update the service chain with new configurations while preserving existing resources. +
+```rust
+pub trait MakeService {
+    type Service;
+    type Error;
+
+    fn make_via_ref(&self, old: Option<&Self::Service>) -> Result<Self::Service, Self::Error>;
+    fn make(&self) -> Result<Self::Service, Self::Error> {
+        self.make_via_ref(None)
+    }
+}
+```
+ +Key features of `MakeService`: + +- The `make_via_ref` method allows creating a new service while optionally referencing an existing one. +- It enables state preservation and resource reuse between service instances. +- The `make` method provides a convenient way to create a service without an existing reference. + +This approach allows for efficient updates to service chains, preserving valuable resources when reconfiguring services. + +## FactoryLayer & FactoryStack + +![image](monolake_factory_stack.png) + +To enable more complex service compositions, we introduce the FactoryLayer trait, which defines how to wrap one factory with another, creating a new composite factory. Factories can define a layer function that creates a factory wrapper, similar to the Tower framework's Layer but with a key distinction: our layer creates a Factory that wraps an inner Factory, which can then be used to construct the entire Service chain. 
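The layering idea can be sketched without the crate (all names below are hypothetical, not the `service_async` API): a layer is just a factory that wraps an inner factory, so building the outermost factory builds the whole chain outside-in, much as `FactoryStack::make` does.

```rust
// Illustrative reduction of the factory-wrapping idea, std only.
pub trait Factory {
    type Service;
    fn make(&self) -> Self::Service;
}

// Innermost factory: builds the leaf "service" (a String stands in for it).
pub struct LeafFactory;

impl Factory for LeafFactory {
    type Service = String;
    fn make(&self) -> String {
        "leaf".to_string()
    }
}

// The "layer": a factory that wraps an inner factory and decorates the
// service the inner factory produces.
pub struct PrefixFactory<F> {
    pub prefix: &'static str,
    pub inner: F,
}

impl<F: Factory<Service = String>> Factory for PrefixFactory<F> {
    type Service = String;
    fn make(&self) -> String {
        // Building traverses outermost -> innermost.
        format!("{}/{}", self.prefix, self.inner.make())
    }
}
```

Stacking two `PrefixFactory` layers over `LeafFactory` and calling `make` on the outermost one yields the fully composed chain in a single call, which is the behavior `FactoryStack` generalizes.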
+ +[FactoryStack](https://docs.rs/service-async/0.2.4/service_async/stack/struct.FactoryStack.html) is a powerful abstraction that allows for the creation of complex service chains. It manages a stack of service factories, providing methods to push new layers onto the stack and to create services from the assembled stack. The FactoryStack works by composing multiple FactoryLayers together. Each layer in the stack wraps the layers below it, creating a nested structure of factories. When you call `make` or `make_async` on a FactoryStack, it traverses this structure from the outermost layer to the innermost, creating the complete service chain. + +## Service Lifecycle Management + +At the core of the threading model is the concept of a "worker": a dedicated thread that is responsible for executing service-related tasks. The framework includes a centralized "worker manager" component that is responsible for spawning and coordinating these worker threads. + +The monolake framework also introduces a sophisticated service lifecycle management system to handle the deployment, updating, and removal of network services. This system supports two primary deployment models: +1. Two-Stage Deployment: + - Staging: In this model, a new service instance is first "staged" or prepared, potentially reusing state from an existing service. This allows for careful validation and testing of the new service before deployment. + - Deployment: Once the new service is staged, it can be deployed to replace the existing service. This process ensures a smooth transition, minimizing downtime and preserving valuable state. +2. Single-Stage Deployment: + - In this simpler model, a new service is created and deployed in a single operation. While less complex, this approach does not provide the same level of state preservation and service continuity as the two-stage deployment model. 
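The two-stage model can be sketched with std types only (hypothetical names, not Monolake's implementation): a new handler is staged next to the live one, and committing is a cheap pointer swap, so callers holding the old `Arc` finish undisturbed.

```rust
use std::sync::{Arc, RwLock};

// Stand-in for a built service chain.
type Handler = Arc<dyn Fn(&str) -> String + Send + Sync>;

pub struct Deployment {
    live: RwLock<Handler>,
    staged: RwLock<Option<Handler>>,
}

impl Deployment {
    pub fn new(h: Handler) -> Self {
        Self { live: RwLock::new(h), staged: RwLock::new(None) }
    }

    // Stage 1: prepare the new instance next to the live one, so it can be
    // validated before it serves any traffic.
    pub fn stage(&self, h: Handler) {
        *self.staged.write().unwrap() = Some(h);
    }

    // Stage 2: commit by swapping the live handler; returns false when
    // nothing was staged.
    pub fn commit(&self) -> bool {
        if let Some(h) = self.staged.write().unwrap().take() {
            *self.live.write().unwrap() = h;
            true
        } else {
            false
        }
    }

    pub fn call(&self, req: &str) -> String {
        // Clone the Arc out of the lock so the call itself runs unlocked.
        let h = self.live.read().unwrap().clone();
        h(req)
    }
}
```

A single-stage deployment in this sketch would be `stage` immediately followed by `commit`, which shows why it cannot offer the same validation window.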
+ +The service lifecycle management system is designed to provide a high degree of control and flexibility over the deployment and updating of network services. This enables seamless service versioning, rolling updates, and state preservation, ensuring that the network services running on the monolake framework can be maintained and improved over time without disrupting the overall system's availability. + +## Putting it all together + +This example demonstrates the practical application of the `MakeService`, [FactoryLayer](https://docs.rs/service-async/0.2.4/service_async/layer/trait.FactoryLayer.html), and [FactoryStack](https://docs.rs/service-async/0.2.4/service_async/stack/struct.FactoryStack.html) concepts. It defines several services (SvcA and SvcB) and their corresponding factories. The FactoryStack is then used to compose these services in a layered manner. The Config struct provides initial configuration, which is passed through the layers. Finally, in the main function, a service stack is created, combining SvcAFactory and SvcBFactory. The resulting service is then called multiple times, showcasing how the chain of services handles requests and maintains state. + +```rust +use std::{ + convert::Infallible, + sync::atomic::{AtomicUsize, Ordering}, +}; + +use service_async::{ + layer::{layer_fn, FactoryLayer}, + stack::FactoryStack, + AsyncMakeService, BoxedMakeService, BoxedService, MakeService, Param, Service, +}; + +#[cfg(unix)] +use monoio::main as main_macro; +#[cfg(not(unix))] +use tokio::main as main_macro; + +// ===== Svc*(impl Service) and Svc*Factory(impl NewService) ===== + +struct SvcA { + pass_flag: bool, + not_pass_flag: bool, +} + +// Implement Service trait for SvcA +impl Service<()> for SvcA { + type Response = (); + type Error = Infallible; + + async fn call(&self, _req: ()) -> Result { + println!( + "SvcA called! 
pass_flag = {}, not_pass_flag = {}",
+            self.pass_flag, self.not_pass_flag
+        );
+        Ok(())
+    }
+}
+
+struct SvcAFactory {
+    init_flag: InitFlag,
+}
+
+struct InitFlag(bool);
+
+impl MakeService for SvcAFactory {
+    type Service = SvcA;
+    type Error = Infallible;
+
+    fn make_via_ref(&self, old: Option<&Self::Service>) -> Result<Self::Service, Self::Error> {
+        Ok(match old {
+            // SvcAFactory can access state from the older service
+            // which was created.
+            Some(r) => SvcA {
+                pass_flag: r.pass_flag,
+                not_pass_flag: self.init_flag.0,
+            },
+            // There was no older service, so create SvcA from
+            // service factory config.
+            None => SvcA {
+                pass_flag: self.init_flag.0,
+                not_pass_flag: self.init_flag.0,
+            },
+        })
+    }
+}
+
+struct SvcB<T> {
+    counter: AtomicUsize,
+    inner: T,
+}
+
+impl<T> Service<usize> for SvcB<T>
+where
+    T: Service<(), Error = Infallible>,
+{
+    type Response = ();
+    type Error = Infallible;
+
+    async fn call(&self, req: usize) -> Result<Self::Response, Self::Error> {
+        let old = self.counter.fetch_add(req, Ordering::AcqRel);
+        let new = old + req;
+        println!("SvcB called! 
{old}->{new}");
+        self.inner.call(()).await?;
+        Ok(())
+    }
+}
+
+struct SvcBFactory<T>(T);
+
+impl<T> MakeService for SvcBFactory<T>
+where
+    T: MakeService<Error = Infallible>,
+{
+    type Service = SvcB<T::Service>;
+    type Error = Infallible;
+
+    fn make_via_ref(&self, old: Option<&Self::Service>) -> Result<Self::Service, Self::Error> {
+        Ok(match old {
+            Some(r) => SvcB {
+                counter: r.counter.load(Ordering::Acquire).into(),
+                inner: self.0.make_via_ref(Some(&r.inner))?,
+            },
+            None => SvcB {
+                counter: 0.into(),
+                inner: self.0.make()?,
+            },
+        })
+    }
+}
+
+// ===== impl layer fn for Factory instead of defining manually =====
+
+impl SvcAFactory {
+    fn layer<C>() -> impl FactoryLayer<C, (), Factory = Self>
+    where
+        C: Param<InitFlag>,
+    {
+        layer_fn(|c: &C, ()| SvcAFactory {
+            init_flag: c.param(),
+        })
+    }
+}
+
+impl<T> SvcBFactory<T> {
+    fn layer<C>() -> impl FactoryLayer<C, T, Factory = Self> {
+        layer_fn(|_: &C, inner| SvcBFactory(inner))
+    }
+}
+
+
+// ===== Define Config and impl Param for it =====
+#[derive(Clone, Copy)]
+struct Config {
+    init_flag: bool,
+}
+
+impl Param<InitFlag> for Config {
+    fn param(&self) -> InitFlag {
+        InitFlag(self.init_flag)
+    }
+}
+
+#[main_macro]
+async fn main() {
+    let config = Config { init_flag: false };
+    let stack = FactoryStack::new(config)
+        .push(SvcAFactory::layer())
+        .push(SvcBFactory::layer());
+
+    let svc = stack.make_async().await.unwrap();
+    svc.call(1).await.unwrap();
+    svc.call(2).await.unwrap();
+    svc.call(3).await.unwrap();
+}
+```
 diff --git a/docs/config-guide/_index.md b/docs/config-guide/_index.md new file mode 100644 index 0000000..a383347 --- /dev/null +++ b/docs/config-guide/_index.md @@ -0,0 +1,132 @@ +--- +title: "Config Guide" +linkTitle: "Config Guide" +weight: 5 +description: "Comprehensive guide to configuring monolake proxy" +--- + +# Proxy Configuration Guide + +This document explains how to configure the Monolake proxy using a custom configuration file. It covers how to set up **HTTP**, **HTTPS**, and **Unix Domain Socket (UDS)** proxies, configure routing, and customize server settings. 
+ +## Overview of the Configuration File + +The configuration file is divided into sections that define various proxy servers, runtime settings, and routes. Each server has specific configuration options, such as the listener type (socket, UNIX domain socket), upstream connections, and HTTP-related settings. + +### Configuration File Structure + +1. **Runtime Configuration (`[runtime]`)** + Defines the runtime environment and settings for the proxy service. + +2. **Server Configuration (`[servers.<name>]`)** + Defines individual proxy servers. Each server can be an HTTP, HTTPS, or UDS proxy with customizable settings for listener type, upstream connections, and routing. + +3. **Routes (`[[servers.<name>.routes]]`)** + Specifies how incoming requests are routed to upstream endpoints. + +--- + +## 1. Runtime Configuration + +The `[runtime]` section configures global settings for the proxy service. Here are the key fields: + +```toml +[runtime] +runtime_type = "io_uring" # Type of runtime to use (e.g., legacy, io_uring) +worker_threads = 2 # Number of worker threads +entries = 1024 # Number of entries for io_uring +``` + +- **`runtime_type`**: Defines the type of runtime to use, such as `legacy` or `io_uring`. The choice of runtime impacts performance and system resources. +- **`worker_threads`**: Specifies the number of worker threads the proxy service will use. Increasing this number may improve handling of concurrent requests. +- **`entries`**: Sets the number of entries for `io_uring` (if used). This controls the number of concurrent I/O operations that can be managed. + +--- + +## 2. Server Configuration + +The `[servers]` section defines individual proxy servers. You can configure each proxy server's listener, upstream connection settings, and other specific options. 
+ +### HTTP Proxy Configuration + +```toml +[servers.demo_http] +name = "monolake.rs" # Proxy name +listener = { type = "socket", value = "0.0.0.0:8080" } # Listener configuration +upstream_http_version = "http11" # HTTP version for upstream connections +http_opt_handlers = { content_handler = true } # Enable HTTP optional handlers +http_timeout = { server_keepalive_timeout_sec = 60, upstream_connect_timeout_sec = 2, upstream_read_timeout_sec = 2 } +``` + +- **`name`**: Specifies the name of the proxy server. This can be used to identify the server in logs or other configuration sections. +- **`listener`**: Defines where the server will listen for incoming connections. In this case, the server listens on `0.0.0.0:8080`. +- **`upstream_http_version`**: Sets the HTTP version used for connections to upstream servers. Here, it uses HTTP/1.1. Use **http2** for HTTP/2. +- **`http_opt_handlers`**: Enables or disables HTTP optional handlers. +- **`http_timeout`**: Configures various HTTP timeouts, such as: + - `server_keepalive_timeout_sec`: The timeout for keeping the connection alive with the client. + - `upstream_connect_timeout_sec`: The timeout for establishing a connection to upstream servers. + - `upstream_read_timeout_sec`: The timeout for reading data from upstream servers. + +### HTTPS Proxy Configuration + +```toml +[servers.demo_https] +name = "tls.monolake.rs" # Proxy name +listener = { type = "socket", value = "0.0.0.0:8081" } # Listener configuration +tls = { chain = "examples/certs/server.crt", key = "examples/certs/server.key" } +``` + +- **`tls`**: Specifies the TLS certificates required for HTTPS encryption. You must provide paths to the certificate chain (`server.crt`) and private key (`server.key`). 
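A mistyped certificate path in the `tls` table would otherwise only surface at the first TLS handshake, so a quick sanity check at startup can help. The helper below is an illustrative std-only sketch, not part of monolake:

```rust
use std::path::Path;

// Illustrative helper (not part of monolake): fail fast if the configured
// `tls.chain` or `tls.key` path does not point at an existing file.
fn tls_paths_exist(chain: &str, key: &str) -> bool {
    Path::new(chain).is_file() && Path::new(key).is_file()
}
```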
+ +### Unix Domain Socket (UDS) Proxy Configuration + +```toml +[servers.demo_uds] +name = "uds.monolake.rs" # Server name +listener = { type = "unix", value = "/tmp/monolake.sock" } # Listener configuration +``` + +- **`listener`**: This server listens on a Unix Domain Socket (`/tmp/monolake.sock`). UDS is useful for communication between processes on the same machine without using network protocols. + +--- + +## 3. Routing Configuration + +Each proxy server can have multiple routes configured to forward requests to upstream endpoints. + +### Example Routes for HTTP Proxy + +```toml +[[servers.demo_http.routes]] +path = '/' # Route path +upstreams = [{ endpoint = { type = "uri", value = "http://ifconfig.me/" } }] # Upstream endpoint + +[[servers.demo_http.routes]] +path = '/{*p}' # Wildcard route path +upstreams = [ { endpoint = { type = "uri", value = "https://httpbin.org/xml" } } ] +``` + +- **`path`**: The route path for which this configuration applies. For example, the route `/` forwards requests to the endpoint `http://ifconfig.me/`. +- **`upstreams`**: A list of upstream endpoints. Each endpoint can be a URI (either HTTP or HTTPS) to which the proxy server forwards the request. +- **Wildcard Routes**: You can use `{*p}` as a wildcard to capture all paths and forward them to an endpoint. This is helpful when you want to handle a wide range of URLs. + +## 4. Applying the Configuration + +Monolake will **automatically detect changes** to the configuration file and apply the updated settings without needing to manually restart the service. + +### Key Behavior of the File Watcher: +- **Automatic Detection**: When the configuration file is **replaced** with a new version, the file watcher will automatically detect the change.
+- **Graceful Transition for Active Connections**: + - If the new configuration updates an existing proxy service, **any existing connections** (those established before the update) will continue to use the old configuration settings. + - **New connections** (those established after the configuration change) will use the **latest configuration**. +- This ensures that the service remains stable for active users while applying the updated configuration for all new users. + +### Steps: +1. **Replace the Configuration File**: Replace the current configuration file with the new version containing your desired changes (e.g., new routes, updated listener settings, or updated certificates). +2. **File Watcher Detection**: The file watcher will automatically detect the replacement and apply the new configuration to the proxy service. +3. **Automatic Application**: The updated configuration is applied to any new incoming connections. Existing connections continue using the configuration that was active when they were established. +4. **Verify**: Check the proxy service logs or metrics to confirm that the new configuration is being applied to new connections, while existing connections are unaffected. + +This approach allows for **seamless updates** to the proxy service, minimizing downtime and ensuring that changes are immediately reflected for new connections without disrupting active ones. + +--- diff --git a/docs/faq/_index.md b/docs/faq/_index.md new file mode 100644 index 0000000..639b2d2 --- /dev/null +++ b/docs/faq/_index.md @@ -0,0 +1,16 @@ +--- +title: "FAQ" +linkTitle: "FAQ" +weight: 7 +keywords: ["Monolake", "HTTP", "Proxy", "Q&A"] +description: "Monolake Frequently Asked Questions and Answers." +--- + +## Monolake + +**Q1: Can you run monolake on macOS?** +* Yes, monolake will default to kqueue instead of io_uring on macOS.
+ +**Q2: Does Monolake support HTTP/2?** +* Yes, monolake supports HTTP/2 on the downstream (client to proxy) connection +* Monolake defaults to HTTP/1.1 on the upstream (proxy to server) connection, with future support for HTTP/2 planned diff --git a/docs/getting-started/_index.md b/docs/getting-started/_index.md new file mode 100644 index 0000000..fb5e2f1 --- /dev/null +++ b/docs/getting-started/_index.md @@ -0,0 +1,75 @@ +--- +title: "Getting Started" +linkTitle: "Getting Started" +weight: 2 +keywords: ["Monolake", "Rust", "Proxy", "Getting Started"] +description: "This page provides a quick start guide for setting up and running Monolake proxy" +--- + +## Prerequisites + +- **Linux Kernel Support**: io_uring requires Linux kernel support. Generally, kernel versions 5.1 and above provide the necessary support. Ensure that your target system has an appropriate kernel version installed. Monolake will fall back to kqueue on macOS. +- **Rust nightly**: See the "Rust installation" section below + +## Quick Start + +This chapter will get you started with Monolake proxy using a simple example config. + +### Rust installation + +To download and install Rust, and set `rustup` to nightly, follow these instructions: + +1. Download and install Rust by running the following command in your terminal: + +   ```bash +   $ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh +   ``` + +   This command will download the Rust installation script and initiate the installation process. + +2. Close the terminal and open a new terminal window to ensure the changes take effect. + +3. Set `rustup` to nightly by running the following command: + +   ```bash +   $ rustup default nightly +   ``` + +   This command sets the default Rust toolchain to the nightly version. + +4. Verify the installation and check the Rust version by running: + +   ```bash +   $ rustc --version +   ``` + +   This command will display the installed Rust version, which should now be set to nightly.
+ +### Build Monolake + +1. Clone the monolake repository: `git clone https://github.com/cloudwego/monolake.git`. +2. Build the binary: + +   ```bash +   $ cargo build --release +   ``` +3. Generate the certificates for the example: +   ```bash +   $ sh examples/gen_cert.sh +   ``` +### Run the example + 1. Start the proxy: +    ```bash +    $ ./target/release/monolake -c examples/config.toml +    ``` + 2. Send a request to the HTTP proxy: + +    ```bash +    $ curl -vvv http://127.0.0.1:8080/ +    ``` + 3. Send a request to the HTTPS proxy: + +    ```bash +    $ curl --resolve gateway.monolake.rs:8081:127.0.0.1 --cacert examples/certs/rootCA.crt -vvv https://gateway.monolake.rs:8081/ +    ``` + \ No newline at end of file diff --git a/docs/glossary/_index.md b/docs/glossary/_index.md new file mode 100644 index 0000000..4a03197 --- /dev/null +++ b/docs/glossary/_index.md @@ -0,0 +1,29 @@ +--- +title: "Glossary" +linkTitle: "Glossary" +weight: 6 +keywords: ["Monolake", "Rust", "Proxy", "Glossary"] +--- + +## Glossary + +| Term | Description | +|---------------------|-------------| +| Monolake | Framework for developing high-performance network services like proxies and gateways | +| Monolake Proxy | A reference HTTP proxy implementation using monolake | +| Monoio | The async runtime used by the framework, providing efficient I/O operations based on io_uring | +| Foundational Crates | Crates that provide low-level, core functionality and building blocks for the monolake ecosystem. Examples: monoio-transports, service-async, certain-map. | +| Framework Crates | Crates that represent the higher-level, user-facing parts of the monolake framework. Examples: monolake-core, monolake-service. | +| monolake-core | Framework crate that provides a robust framework for worker orchestration, service deployment, and lifecycle management. | +| monolake-service | Framework crate that provides a collection of services for building high-performance, modular HTTP servers and Thrift services.
| +| service-async | A foundational crate that introduces a refined Service trait with efficient borrowing and zero-cost abstractions, as well as utilities for service composition and state management. | +| Service | A modular component that provides a specific functionality. Defined using the Service trait from the service-async crate. | +| Service Chain | A composition of multiple services, where the output of one service is the input of the next. Enabled by the FactoryStack and FactoryLayer from service-async. | +| Service Factory | A component that is responsible for creating and managing instances of services. | +| FactoryLayer | A trait from the service-async crate that defines how to wrap one factory with another, creating a new composite factory. | +| FactoryStack | An abstraction from the service-async crate that manages a stack of service factories, enabling the creation of complex service chains. | +| MakeService | A trait implemented by service factories to create instances of services that implement the Service trait. | +| AsyncMakeService | An asynchronous version of the MakeService trait, allowing for more complex service composition. | +| certain-map | A foundational crate that provides a typed map data structure, ensuring the existence of specific items at compile-time, useful for managing data dependencies between services. | +| monoio-transports | A foundational crate that provides high-performance, modular networking capabilities, including connectors and utilities for efficient network communications. | +| Connector Trait | A trait defined in the monoio-transports crate that provides a common interface for establishing network connections, allowing for modular and composable network communication solutions. 
| \ No newline at end of file diff --git a/docs/overview/_index.md b/docs/overview/_index.md new file mode 100644 index 0000000..fa5d5a7 --- /dev/null +++ b/docs/overview/_index.md @@ -0,0 +1,63 @@ +--- +title: "Overview" +linkTitle: "Overview" +weight: 1 +keywords: ["Proxy", "Rust", "io-uring"] +--- + +## Monolake + +Monolake is a framework for developing high-performance network services like proxies and gateways. It is built from the ground up as a blank slate design, starting with a custom async runtime called [Monoio](https://docs.rs/crate/monoio/latest) that has first-class support for the io_uring Linux kernel feature. + +While the most widely used Rust async runtime is [Tokio](https://docs.rs/tokio/latest/tokio/), which is an excellent and high-performance epoll/kqueue-based runtime, Monolake takes a different approach. The monoio runtime developed by Bytedance is designed with a thread-per-core model in mind, allowing Monolake to extract maximum performance from io_uring's highly efficient asynchronous I/O operations. + +By building Monolake on this novel runtime foundation, the team was able to incorporate new first-class support for io_uring throughout the ecosystem. This includes io_uring-specific IO traits and a unique service architecture that differs from the popular Tower implementation. Monolake also includes io_uring-optimized implementations for protocols like Thrift and HTTP. + +The Monolake framework has been used to build various high-performance proxies and gateways, and it is actively deployed in production at ByteDance. 
Its use cases are wide-ranging and include: + +- Application Gateways: For protocol conversion, such as HTTP to Thrift +- Security Gateways: Providing pseudonymization for gRPC and Thrift RPCs +- Service Ingress Controllers: Serving as the ingress controller for FaaS services + +## Monolake Proxy + +[Monolake Proxy](https://github.com/cloudwego/monolake/tree/main/monolake) is a reference implementation that leverages the various components within the Monolake framework to build a high-performance HTTP and Thrift proxy. This project serves as a showcase for the unique features and capabilities of the Monolake ecosystem. By utilizing the efficient networking capabilities of the monoio-transports crate, the modular service composition of service-async, and the type-safe context management provided by certain-map, Monolake Proxy demonstrates the practical application of the Monolake framework. Additionally, this reference implementation allows for the collection of benchmarks, enabling comparisons against other popular proxy solutions like Nginx and Envoy. + +## Performance + +### Test environment + +- AWS instance: c6a.8xlarge +- CPU: AMD EPYC 7R13 Processor, 16 cores, 32 threads +- Memory: 64GB +- OS: 6.1.94-99.176.amzn2023.x86_64, Amazon Linux 2023.5.20240805 +- Nginx: nginx/1.24.0 + +#### Requests Per Second (RPS) vs.
Body Size +| HTTPS | HTTP | +| :------------------------------------------------- | :-------------------------------------------------: | +| ![image](https_req_per_sec_vs_body_size.png) | ![image](http_req_per_sec_vs_body_size.png) | + +### Concurrency performance +| HTTPS | HTTP | +| :------------------------------------------------- | :-------------------------------------------------: | +| ![image](https_req_per_sec_vs_worker_threads.png) | ![image](http_req_per_sec_vs_worker_threads.png) | + +## Related Projects + +- [Monoio](https://github.com/bytedance/monoio): A high-performance thread-per-core io_uring based async runtime + +## Related Crates + +![image](monolake_crates.png) + +| Crate | Description | +|-------|-------------| +| [monoio-transports](https://crates.io/crates/monoio-transports) | A foundational crate that provides high-performance, modular networking capabilities, including connectors and utilities for efficient network communications | +| [service-async](https://crates.io/crates/service-async) | A foundational crate that introduces a refined Service trait with efficient borrowing and zero-cost abstractions, as well as utilities for service composition and state management | +| [certain-map](https://crates.io/crates/certain-map) | A foundational crate that provides a typed map data structure, ensuring the existence of specific items at compile-time, useful for managing data dependencies between services | +| [monoio-thrift](https://crates.io/crates/monoio-thrift) | Monoio native, io_uring compatible thrift implementation | +| [monoio-http](https://crates.io/crates/monoio-http) | Monoio native, io_uring compatible HTTP/1.1 and HTTP/2 implementation | +| [monoio-native-tls](https://crates.io/crates/monoio-native-tls) | The native-tls implementation compatible with monoio | +| [monoio-rustls](https://crates.io/crates/monoio-rustls) | The rustls implementation compatible with monoio | + diff --git a/docs/overview/http_req_per_sec_vs_body_size.png
b/docs/overview/http_req_per_sec_vs_body_size.png new file mode 100644 index 0000000..7a7593e Binary files /dev/null and b/docs/overview/http_req_per_sec_vs_body_size.png differ diff --git a/docs/overview/http_req_per_sec_vs_worker_threads.png b/docs/overview/http_req_per_sec_vs_worker_threads.png new file mode 100644 index 0000000..ef8d5c5 Binary files /dev/null and b/docs/overview/http_req_per_sec_vs_worker_threads.png differ diff --git a/docs/overview/https_req_per_sec_vs_body_size.png b/docs/overview/https_req_per_sec_vs_body_size.png new file mode 100644 index 0000000..ad9276a Binary files /dev/null and b/docs/overview/https_req_per_sec_vs_body_size.png differ diff --git a/docs/overview/https_req_per_sec_vs_worker_threads.png b/docs/overview/https_req_per_sec_vs_worker_threads.png new file mode 100644 index 0000000..a5ab5aa Binary files /dev/null and b/docs/overview/https_req_per_sec_vs_worker_threads.png differ diff --git a/docs/overview/monolake_crates.png b/docs/overview/monolake_crates.png new file mode 100644 index 0000000..a14aef0 Binary files /dev/null and b/docs/overview/monolake_crates.png differ diff --git a/docs/tutorial/_index.md b/docs/tutorial/_index.md new file mode 100644 index 0000000..0e2dbe1 --- /dev/null +++ b/docs/tutorial/_index.md @@ -0,0 +1,8 @@ +--- +title: "Tutorial" +linkTitle: "Tutorial" +weight: 4 +keywords: ["Proxy", "Rust", "io_uring", "Architecture"] +--- + +In this guide, we'll walk through the process of implementing an HTTP routing service using the `monolake-services` crate. This service will handle incoming HTTP requests and route them to appropriate upstream servers based on the configured routes and their associated endpoints. 
\ No newline at end of file diff --git a/docs/tutorial/code.md b/docs/tutorial/code.md new file mode 100644 index 0000000..14e5c1d --- /dev/null +++ b/docs/tutorial/code.md @@ -0,0 +1,114 @@ +--- +title: "Putting it all together" +linkTitle: "Putting it all together" +weight: 3 +--- + +## Putting it all together + +```rust +use service_async::{ +    layer::{layer_fn, FactoryLayer}, AsyncMakeService, MakeService, Param, ParamMaybeRef, ParamRef, Service +}; + +#[derive(Clone)] +pub struct RoutingHandler<H> { +    inner: H, +    router: Router<RouteConfig>, +} + +impl<H, CX, B> Service<(Request<B>, CX)> for RoutingHandler<H> +where +    CX: ParamRef<PeerAddr>, +    H: HttpHandler<CX, B>, +    H::Body: FixedBody, +{ +    type Response = ResponseWithContinue<H::Body>; +    type Error = H::Error; + +    async fn call( +        &self, +        (mut request, ctx): (Request<B>, CX), +    ) -> Result<Self::Response, Self::Error> { +        let req_path = request.uri().path(); +        tracing::info!("request path: {req_path}"); + +        let peer_addr = ParamRef::<PeerAddr>::param_ref(&ctx); +        tracing::info!("Peer Addr: {:?}", peer_addr); + +        match self.router.at(req_path) { +            Ok(route) => { +                let route = route.value; +                tracing::info!("the route id: {}", route.id); +                use rand::seq::SliceRandom; +                let upstream = route +                    .upstreams +                    .choose(&mut rand::thread_rng()) +                    .expect("empty upstream list"); + +                rewrite_request(&mut request, upstream); + +                self.inner.handle(request, ctx).await +            } +            Err(e) => { +                debug!("match request uri: {} with error: {e}", request.uri()); +                Ok((generate_response(StatusCode::NOT_FOUND, false), true)) +            } +        } +    } +} + + +pub struct RoutingHandlerFactory<F> { +    inner: F, +    routes: Vec<RouteConfig>, +} + +#[derive(thiserror::Error, Debug)] +pub enum RoutingFactoryError<E> { +    #[error("inner error: {0:?}")] +    Inner(E), +    #[error("empty upstream")] +    EmptyUpstream, +    #[error("router error: {0:?}")] +    Router(#[from] matchit::InsertError), +} + +impl<F: MakeService> MakeService for RoutingHandlerFactory<F> { +    type Service = RoutingHandler<F::Service>; +    type Error = RoutingFactoryError<F::Error>; + +    fn make_via_ref(&self, old: Option<&Self::Service>) -> Result<Self::Service, Self::Error> { +        let mut router: Router<RouteConfig> = Router::new(); +        for route in self.routes.iter() { +            router.insert(&route.path, route.clone())?; +            if route.upstreams.is_empty() { +                return Err(RoutingFactoryError::EmptyUpstream); +            } +        } +        Ok(RoutingHandler { +            inner: self +                .inner +                .make_via_ref(old.map(|o| &o.inner)) +                .map_err(RoutingFactoryError::Inner)?, +            router, +        }) +    } +} + +impl<H> RoutingHandler<H> { +    pub fn layer<C, F>() -> impl FactoryLayer<C, F, Factory = RoutingHandlerFactory<F>> +    where +        C: Param<Vec<RouteConfig>>, +    { +        layer_fn(|c: &C, inner| { +            let routes = c.param(); +            RoutingHandlerFactory { inner, routes } +        }) +    } +} + +fn rewrite_request<B>(request: &mut Request<B>, upstream: &Upstream) { +    // URI rewrite logic +} +``` diff --git a/docs/tutorial/config.md b/docs/tutorial/config.md new file mode 100644 index 0000000..6507e68 --- /dev/null +++ b/docs/tutorial/config.md @@ -0,0 +1,58 @@ +--- +title: "Config & context management" +linkTitle: "Config & context management" +weight: 1 +description: "This doc covers how to manage configuration and context" +--- + +## Configuration Management + +When building a service-oriented application using the `monolake-services` crate, you'll need to define the necessary configuration fields in your main `ServerConfig` struct. These fields will be used by the service factories to construct the services that make up your application. + +The specific fields you'll need to add will depend on the services you're using. For example, if you're implementing a routing HTTP service, you'll probably want to add a routes field to hold the routing configuration. + +To configure the routing service, you'll need to add the routes field to your `ServerConfig` struct. This field will hold the RouteConfig structures that define the routing rules. +```rust +pub struct ServerConfig { +    pub name: String, +    // ...
other config fields, used by other services +    pub routes: Vec<RouteConfig>, +} +``` + +When creating the [FactoryStack](https://docs.rs/service-async/0.2.4/service_async/stack/struct.FactoryStack.html) to build your service chain, you'll need to ensure that the ServerConfig struct is used as the configuration parameter. To do this, you'll need to implement the Param trait for the fields that the RoutingHandlerFactory expects to access, in this case, the `Vec<RouteConfig>`. + +```rust +impl Param<Vec<RouteConfig>> for ServerConfig { +    fn param(&self) -> Vec<RouteConfig> { +        self.routes.clone() +    } +} +``` + +By implementing this Param trait, you're ensuring that the necessary configuration data, specifically the routes field, is available to the `RoutingHandlerFactory` when it constructs the `RoutingHandler` service. + +This approach applies to any service you're implementing using the monolake-services crate. You'll need to add the required configuration fields to your ServerConfig struct and implement the appropriate Param traits to make the data accessible to the service factories. + +## Context Management + +Before creating the RoutingHandler service, you need to define the request context using the [certain_map](https://docs.rs/certain-map/latest/certain_map/) crate. This context will hold the data that the RoutingHandler expects to be available, such as the peer address. + +The certain_map crate provides a way to define a typed map that ensures the existence of specific items at compile-time. This is particularly useful when working with service-oriented architectures, where different services may depend on certain pieces of information being available in the request context. + +```rust +certain_map::certain_map! { +    #[derive(Debug, Clone)] +    #[empty(EmptyContext)] +    #[full(FullContext)] +    pub struct Context { +        peer_addr: PeerAddr, +    } +} +``` + +In this example, the Context struct has a single field: peer_addr of type `PeerAddr`.
It's important to note that the fields in the Context struct should correspond to the data that the `RoutingHandler` service expects to be available. In this case, the RoutingHandler requires the peer_addr information to be set in the context. + +By defining the Context using the certain_map crate, you can ensure that the necessary data is available at compile-time, preventing runtime errors and simplifying the implementation of your services. + +In this example, we assume that some other service in the service chain, such as the ContextService, is responsible for setting the peer_addr field in the Context. The RoutingHandler will then rely on this information being available when it is called. diff --git a/docs/tutorial/service.md b/docs/tutorial/service.md new file mode 100644 index 0000000..083d00c --- /dev/null +++ b/docs/tutorial/service.md @@ -0,0 +1,101 @@ +--- +title: "Creating Service and Factory" +linkTitle: "Creating Service and Factory" +weight: 2 +--- + +## Defining the Service and Factory +The `RoutingHandlerFactory` is responsible for creating and updating the `RoutingHandler` service instances. This factory implements the AsyncMakeService trait, allowing it to be used in a FactoryStack for service composition. + +Let's start by defining the `RoutingHandler` service itself: +```rust +pub struct RoutingHandler<H> { +    inner: H, +    router: Router<RouteConfig>, +} +``` +The `RoutingHandler` is responsible for matching incoming request paths against a set of predefined routes, selecting an appropriate upstream server, and forwarding the request to that server. It contains two fields: + 1. inner: The inner handler that processes requests after routing. + 2. router: A `matchit::Router<RouteConfig>` containing the routing configuration.
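Conceptually, the handler matches the request path against the router and then picks one of the matched route's upstreams. The following std-only sketch illustrates that selection logic with a plain `HashMap` and a wildcard fallback; it is illustrative only — the real implementation uses `matchit` for path matching and chooses an upstream at random:

```rust
use std::collections::HashMap;

// Illustrative route type; monolake's real `RouteConfig` carries more fields.
#[derive(Clone)]
struct Route {
    upstreams: Vec<String>,
}

// Simplified stand-in for `matchit::Router`: exact-path lookup plus a single
// wildcard fallback, mirroring the `/` and `/{*p}` routes from the config guide.
struct SimpleRouter {
    routes: HashMap<String, Route>,
    wildcard: Option<Route>,
}

impl SimpleRouter {
    // The real handler picks randomly among a route's upstreams; this sketch
    // just takes the first one.
    fn select_upstream(&self, path: &str) -> Option<&str> {
        let route = self.routes.get(path).or(self.wildcard.as_ref())?;
        route.upstreams.first().map(|s| s.as_str())
    }
}
```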
+ +Now, let's look at the implementation of the RoutingHandlerFactory: +```rust +use monolake_services::http::handlers::route::{RoutingHandlerFactory, RouteConfig}; +use service_async::Param; + +impl<F: AsyncMakeService> AsyncMakeService for RoutingHandlerFactory<F> { +    type Service = RoutingHandler<F::Service>; +    type Error = RoutingFactoryError<F::Error>; + +    async fn make_via_ref( +        &self, +        old: Option<&Self::Service>, +    ) -> Result<Self::Service, Self::Error> { +        let mut router: Router<RouteConfig> = Router::new(); +        for route in self.routes.iter() { +            router.insert(&route.path, route.clone())?; +            if route.upstreams.is_empty() { +                return Err(RoutingFactoryError::EmptyUpstream); +            } +        } +        Ok(RoutingHandler { +            inner: self +                .inner +                .make_via_ref(old.map(|o| &o.inner)) +                .await +                .map_err(RoutingFactoryError::Inner)?, +            router, +        }) +    } +} +``` + +In this implementation, the RoutingHandlerFactory takes two parameters: +1. inner: This is the inner service factory that the RoutingHandler will use to handle the requests after routing. +2. routes: This is the vector of `RouteConfig` instances that define the routing rules. + +The AsyncMakeService implementation for the RoutingHandlerFactory defines how to create a new RoutingHandler instance. It first creates a Router from the configured RouteConfig instances, and then creates the RoutingHandler by calling the make_via_ref method on the inner service factory. + +Note that in this case, we don't rely on any state from the previous RoutingHandler instance, as the routing configuration is fully defined by the RouteConfig instances. If the inner service factory had some stateful resources (like a connection pool) that needed to be preserved, we could clone those resources when creating the new RoutingHandler. For a more detailed example involving resource transfer, see [UpstreamHandler](https://github.com/cloudwego/monolake/blob/fd2cbe1a8708c379d6355b3cc979540ec49fdb4f/monolake-services/src/http/handlers/upstream.rs#L338), which involves transfer of an HTTP connection pool from the previous UpstreamHandler instance.
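The `make_via_ref(old)` pattern is easier to see in isolation. Below is a minimal std-only sketch (illustrative, not monolake code) of a factory that carries an atomic counter from the previous service instance into its replacement — the same shape UpstreamHandler uses to hand over its connection pool:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

struct CountingService {
    counter: AtomicUsize,
}

struct CountingFactory;

impl CountingFactory {
    // Mirrors make_via_ref: reuse state from the old instance when one
    // exists, otherwise start fresh.
    fn make_via_ref(&self, old: Option<&CountingService>) -> CountingService {
        match old {
            Some(prev) => CountingService {
                counter: AtomicUsize::new(prev.counter.load(Ordering::Acquire)),
            },
            None => CountingService {
                counter: AtomicUsize::new(0),
            },
        }
    }
}
```

Because the factory receives a reference to the old service rather than ownership, it can selectively copy or clone just the state worth keeping while everything else is rebuilt from the new configuration.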
+ +To integrate the `RoutingHandler` into a service stack, we can use the layer function provided by the `RoutingHandler` type: + +```rust +use monolake_services::http::handlers::route::RoutingHandler; +use service_async::layer::FactoryLayer; + +impl<H> RoutingHandler<H> { +    pub fn layer<C, F>() -> impl FactoryLayer<C, F, Factory = RoutingHandlerFactory<F>> +    where +        C: Param<Vec<RouteConfig>>, +    { +        service_async::layer::layer_fn(|c: &C, inner| { +            RoutingHandlerFactory::new(c.param(), inner) +        }) +    } +} +``` + +The layer function creates a FactoryLayer that can be used in a FactoryStack to add the RoutingHandler to the service chain. The FactoryLayer trait is a key component of the service_async crate, allowing you to wrap and compose service factories in a modular and extensible way. + +In this implementation, the layer function takes a configuration parameter C that implements the `Param<Vec<RouteConfig>>` trait. This ensures that the necessary routing configuration is available when creating the RoutingHandlerFactory. The layer function then creates the RoutingHandlerFactory by passing the `Vec<RouteConfig>` and the inner service factory to the RoutingHandlerFactory::new function. + + +## Adding the FactoryLayer in the FactoryStack + +Finally, to integrate the RoutingHandler into a service stack, you can use the FactoryStack and the RoutingHandler::layer function: + +```rust +use monolake_services::http::handlers::{ +    route::RoutingHandler, +    ContentHandler, ConnectionPersistenceHandler, UpstreamHandler, +}; +use service_async::{layer::FactoryLayer, stack::FactoryStack, Param}; + +let stacks = FactoryStack::new(config) +    .replace(UpstreamHandler::factory(Default::default())) +    .push(ContentHandler::layer()) +    .push(RoutingHandler::layer()) +    .push(ConnectionPersistenceHandler::layer()); +``` + +In this example, we create a FactoryStack and add the RoutingHandler::layer to the stack, along with other handlers like ContentHandler and ConnectionPersistenceHandler.
The FactoryStack will compose these layers into a complete service chain, allowing the RoutingHandler to be integrated seamlessly. \ No newline at end of file