From 1701c75ee4a695f0473edc87cffa9650f0140254 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Mateusz=20S=C5=82uszniak?=
Date: Tue, 3 Feb 2026 19:11:57 +0100
Subject: [PATCH 1/2] docs: Add subsection about adding new models to RNE ecosystem

---
 docs/docs/01-fundamentals/01-getting-started.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/docs/docs/01-fundamentals/01-getting-started.md b/docs/docs/01-fundamentals/01-getting-started.md
index 9c70651b0..7c6294f2e 100644
--- a/docs/docs/01-fundamentals/01-getting-started.md
+++ b/docs/docs/01-fundamentals/01-getting-started.md
@@ -90,6 +90,10 @@ Running the app with the library:
 yarn run expo: -d
 ```
 
+## Supporting new models in React Native ExecuTorch
+
+The process of adding new functionality in our library is pretty much the same for every functionality. Firstly, we try to export PyTorch models that implement given task e.g. object detection into `*.pte` format. This a format supported by ExecuTorch runtime to call these models directly in C++. When we have such file, we implement code in C++ that utilizes ExecuTorch runtime and call inference in C++. Additionally, we implement other functions like preprocessing or postprocessing of data. Final step is implementing TypeScript API that will expose way to interoperate with previously written C++ code.
+
 ## Good reads
 
 If you want to dive deeper into ExecuTorch or our previous work with the framework, we highly encourage you to check out the following resources:

From bc0fef1307a9430de5ab16a019f82752ef9baaab Mon Sep 17 00:00:00 2001
From: Mateusz Sluszniak <56299341+msluszniak@users.noreply.github.com>
Date: Wed, 4 Feb 2026 11:40:54 +0100
Subject: [PATCH 2/2] Update docs/docs/01-fundamentals/01-getting-started.md

---
 docs/docs/01-fundamentals/01-getting-started.md | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/docs/docs/01-fundamentals/01-getting-started.md b/docs/docs/01-fundamentals/01-getting-started.md
index 7c6294f2e..b54efba66 100644
--- a/docs/docs/01-fundamentals/01-getting-started.md
+++ b/docs/docs/01-fundamentals/01-getting-started.md
@@ -92,7 +92,13 @@ yarn run expo: -d
 
 ## Supporting new models in React Native ExecuTorch
 
-The process of adding new functionality in our library is pretty much the same for every functionality. Firstly, we try to export PyTorch models that implement given task e.g. object detection into `*.pte` format. This a format supported by ExecuTorch runtime to call these models directly in C++. When we have such file, we implement code in C++ that utilizes ExecuTorch runtime and call inference in C++. Additionally, we implement other functions like preprocessing or postprocessing of data. Final step is implementing TypeScript API that will expose way to interoperate with previously written C++ code.
+Adding new functionality to the library follows a consistent three-step integration pipeline:
+
+1. **Model Serialization:** We export PyTorch models for specific tasks (e.g., object detection) into the `*.pte` format, which is optimized for the ExecuTorch runtime.
+
+2. **Native Implementation:** We develop a C++ execution layer that interfaces with the ExecuTorch runtime to handle inference. This layer also manages model-dependent logic, such as data pre-processing and post-processing.
+
+3. **TS Bindings:** Finally, we implement a TypeScript API that bridges the JavaScript environment to the native C++ logic, providing a clean, typed interface for the end user.
 
 ## Good reads
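
For a concrete picture of step 1 (Model Serialization) in the pipeline added by this patch, the sketch below shows roughly how a PyTorch model can be lowered to a `*.pte` file with the ExecuTorch export APIs. It is a minimal sketch rather than the library's actual export script: it assumes the `executorch` Python package is installed, and the tiny `torch.nn.Linear` model and its input shape are placeholders standing in for a real task model such as an object detector.

```python
# Sketch of step 1 (Model Serialization): lowering a PyTorch model to *.pte.
# Assumption: the `executorch` Python package is installed; the Linear model
# and its input shape below are placeholders for a real task model.
import torch
from executorch.exir import to_edge

model = torch.nn.Linear(4, 2).eval()   # placeholder model
example_inputs = (torch.randn(1, 4),)  # example input matching the model's signature

# Capture the model graph, lower it to the Edge dialect, then to ExecuTorch.
exported_program = torch.export.export(model, example_inputs)
executorch_program = to_edge(exported_program).to_executorch()

# The resulting buffer is the serialized program the C++ runtime loads.
with open("model.pte", "wb") as f:
    f.write(executorch_program.buffer)
```

The resulting `model.pte` file is what the native C++ layer (step 2) loads through the ExecuTorch runtime, and the TypeScript API (step 3) then exposes that native inference path to JavaScript.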