Closed
75 commits
b3dcfff
Technical review of NXP with ExecuTorch LP
annietllnd Jan 27, 2026
76a1469
Update page 9: Build executor_runner firmware with improved formattin…
fidel-makatia Jan 31, 2026
1418abf
Add images for page 9: MCUXpresso installer, import repo, import proj…
fidel-makatia Jan 31, 2026
d22348d
Continued technical review of NXP LP
annietllnd Feb 4, 2026
48dd8a1
initial draft
Feb 16, 2026
3779e45
final tidy before PR
Feb 17, 2026
c0dde9c
fix - renamed file to lowercase to pass test
kieranhejmadi01 Feb 18, 2026
1f0d460
Fix - replaced old naming with new branding
kieranhejmadi01 Mar 2, 2026
9c88177
fix - renamed directory to match new branding
Mar 2, 2026
68d1d8a
updated config with code stacks disabled
Mar 2, 2026
894045b
A start on a Performix learning path for Topdown recipe
bccbrendan Mar 4, 2026
5e4d86b
inline images
bccbrendan Mar 4, 2026
7c141fc
Add Instruction Mix page showing improvements
bccbrendan Mar 4, 2026
035f33d
Optimize with compiler flags
bccbrendan Mar 5, 2026
7acade6
fix arrow directions
bccbrendan Mar 5, 2026
05bce1a
Plot performance improvement
bccbrendan Mar 5, 2026
a23890e
link to cpu hotspots and mandelbrot vectorized branch
bccbrendan Mar 5, 2026
8b54ec6
Copy edit for spelling, tense, active voice
bccbrendan Mar 5, 2026
f57ba69
co-author Kieran
bccbrendan Mar 5, 2026
1cfe5e7
add self to contributors.csv
bccbrendan Mar 5, 2026
7713faa
Fix issues seen in 'hugo server' spot check
bccbrendan Mar 5, 2026
1e82cb7
Fix formatting per style guide
bccbrendan Mar 5, 2026
4f9a377
Merge branch 'main' into main
bccbrendan Mar 5, 2026
a89f1a0
topdown shows high SIMD utilization
bccbrendan Mar 5, 2026
3464fc8
remove unused assets
bccbrendan Mar 5, 2026
3baedf9
better zoom on screenshots
bccbrendan Mar 5, 2026
c1d694e
Add Runbook tag for discoverability from Performix
bccbrendan Mar 6, 2026
858d0ff
start renaming topdown -> cpu microarchitecture
bccbrendan Mar 12, 2026
f130597
Rename topdown -> cpu microarchitecture
bccbrendan Mar 12, 2026
94726d6
New LP with AI agents using Strands and Device Connect
annietllnd Mar 12, 2026
a1a7428
Swap order of options
annietllnd Mar 13, 2026
fbf287c
Add target approach to local example
annietllnd Mar 13, 2026
153dcba
Update ExecuTorch on NXP FRDM i.MX 93 learning path
fidel-makatia Mar 14, 2026
2f5204c
Merge branch 'main' into main
pareenaverma Mar 16, 2026
04c96b6
Update with single repo approach
annietllnd Mar 16, 2026
9108fda
Merge pull request #63 from fidel-makatia/fidel-makatia/main
annietllnd Mar 16, 2026
a0653d1
Update repo name
annietllnd Mar 16, 2026
eb69a7a
Final touches
annietllnd Mar 16, 2026
4ff17dd
Remove draft status for internal review
annietllnd Mar 16, 2026
95dffe4
Merge pull request #2983 from annietllnd/gtc-strands
pareenaverma Mar 16, 2026
696f8ec
Fix typos and improve clarity in documentation across multiple files
madeline-underwood Mar 16, 2026
b34fa2d
Normalize section headings to lowercase for consistency in documentation
madeline-underwood Mar 16, 2026
6447a42
Refine documentation for Device Connect and Strands: enhance clarity,…
madeline-underwood Mar 16, 2026
696e337
Technical review pt2. of NXP and ExecuTorch LP
annietllnd Mar 16, 2026
899ebbb
Add Fidel's socials
annietllnd Mar 16, 2026
8d915f0
Merge pull request #2997 from annietllnd/fidel-makatia/main
pareenaverma Mar 16, 2026
6494adf
Add instructions for prompt file in gemini cli install guide
jaidev17 Mar 16, 2026
9885d1e
Refine documentation for ExecuTorch learning path: improve clarity an…
madeline-underwood Mar 17, 2026
6f42228
Update titles for clarity in ExecuTorch learning path documentation
madeline-underwood Mar 17, 2026
6aae08e
Remove draft status and improve formatting in ExecuTorch learning pat…
madeline-underwood Mar 17, 2026
6f7caa8
Add introductory section to overview for clarity on prerequisites
madeline-underwood Mar 17, 2026
5a603f2
Enhance documentation clarity: add section headers for deployment ove…
madeline-underwood Mar 17, 2026
7796664
Refine documentation for ExecuTorch learning path: clarify the milest…
madeline-underwood Mar 17, 2026
ee0e347
Refine documentation for NXP FRDM i.MX 93: improve clarity in connect…
madeline-underwood Mar 17, 2026
8245bbe
Merge pull request #2996 from madeline-underwood/strand
pareenaverma Mar 17, 2026
c392406
Tech review image classification on Ethos-U85
jasonrandrews Mar 17, 2026
58421bf
Merge pull request #3000 from jasonrandrews/review3
jasonrandrews Mar 17, 2026
8c67e16
Merge pull request #2961 from bccbrendan/main
jasonrandrews Mar 17, 2026
370e93f
Disable unstable maintenance tests for CI
jaidev17 Mar 17, 2026
9c7067c
Add prompt-first guidance to Kiro, Copilot, and Codex guides
jaidev17 Mar 17, 2026
fc2d2f6
Update _index.md to set draft status
pareenaverma Mar 17, 2026
2495d9d
Merge pull request #2901 from kieranhejmadi01/cpu_hotspot_getting_sta…
pareenaverma Mar 17, 2026
fc870c1
Merge pull request #2999 from madeline-underwood/nxp2
pareenaverma Mar 18, 2026
7d93f12
Update codex-cli.md
pareenaverma Mar 18, 2026
0612a4e
Refine language for clarity in codex-cli.md
pareenaverma Mar 18, 2026
7a3716f
Clarify usage of Arm MCP Server with prompt files
pareenaverma Mar 18, 2026
9ae6ce2
Improve clarity of Arm MCP Server documentation
pareenaverma Mar 18, 2026
d60ba72
Improve clarity of Arm MCP Server usage instructions
pareenaverma Mar 18, 2026
a9ff60c
Merge pull request #2998 from jaidev17/agent-prompt-files-content
pareenaverma Mar 18, 2026
f8eaf8c
Merge pull request #3001 from jaidev17/test-pull-request-error
pareenaverma Mar 18, 2026
09e4dc3
dynatrace tech review and copilot review
DougAnsonAustinTX Mar 18, 2026
0ee1445
first tech review of CPU microarchitecture
jasonrandrews Mar 18, 2026
1d62153
lowercase on image
jasonrandrews Mar 18, 2026
9acab46
Merge pull request #3004 from jasonrandrews/review3
jasonrandrews Mar 18, 2026
c12e4b3
Merge pull request #3003 from DougAnsonAustinTX/dynatrace-techreview-1
pareenaverma Mar 18, 2026
5 changes: 3 additions & 2 deletions assets/contributors.csv
@@ -97,7 +97,7 @@ Waheed Brown,Arm,https://github.com/armwaheed,https://www.linkedin.com/in/waheed
Aryan Bhusari,Arm,,https://www.linkedin.com/in/aryanbhusari,,
Ken Zhang,Insyde,,kai-di-zhang-b1642a266,,
Ann Cheng,Arm,anncheng-arm,hello-ann,,
Fidel Makatia Omusilibwa,,,,,
Fidel Makatia Omusilibwa,,fidel-makatia,fidel-makatia-hsc-mieee,,
Ker Liu,,,,,
Rui Chang,,,,,
Alejandro Martinez Vicente,Arm,,,,
@@ -113,5 +113,6 @@ Steve Suzuki,Arm,,,,
Qixiang Xu,Arm,,,,
Phalani Paladugu,Arm,phalani-paladugu,phalani-paladugu,,
Richard Burton,Arm,Burton2000,,,
Brendan Long,Arm,bccbrendan,https://www.linkedin.com/in/brendan-long-5817924/,,
Asier Arranz,NVIDIA,,asierarranz,,asierarranz.com
Prince Agyeman,Arm,,,,
Prince Agyeman,Arm,,,,
14 changes: 13 additions & 1 deletion content/install-guides/codex-cli.md
@@ -233,6 +233,18 @@ The Arm MCP server is listed in the output. If the arm-mcp server indicates it's

You can also verify the tools are available by asking Codex to list the available Arm MCP tools.

### Use Arm prompt files with the MCP Server

The Arm MCP Server provides a rich set of tools and a knowledge base, but to make the best use of it, pair it with Arm-specific prompt files. These prompt files supply task-oriented context, best practices, and structured workflows that guide the agent in using MCP tools more effectively across common Arm development tasks.

#### Get the prompt files

Browse the [agent integrations directory for Codex](https://github.com/arm/mcp/tree/main/agent-integrations/codex) to find prompt files for specific use cases:

- **Arm migration** ([arm-migration.md](https://github.com/arm/mcp/blob/main/agent-integrations/codex/arm-migration.md)): Helps the agent systematically migrate applications from x86 to Arm, including dependency analysis, compatibility checks, and optimization recommendations.

Each prompt file is a Markdown configuration that you can reference in your Codex CLI sessions to enable more targeted, task-specific assistance.
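If you want to keep a local copy of a prompt file, you can derive its raw-download URL from the blob link above. A minimal sketch — the `raw.githubusercontent.com` mapping is a general GitHub hosting convention, not something documented by the Arm MCP project itself:

```python
# Sketch: turn a GitHub "blob" link for a prompt file into its raw-content
# URL, which tools like curl can download directly. The blob URL is the one
# linked above; the raw-URL mapping is an assumption about GitHub hosting.
blob_url = "https://github.com/arm/mcp/blob/main/agent-integrations/codex/arm-migration.md"

raw_url = blob_url.replace("github.com", "raw.githubusercontent.com").replace("/blob/", "/")
print(raw_url)

# Then fetch it for use in your Codex CLI session, for example:
#   curl -L -o arm-migration.md "<raw_url>"
```

The same transformation works for the Gemini, Copilot, and Kiro prompt files linked in the other install guides.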

If you're facing issues or have questions, reach out to mcpserver@arm.com.

You're now ready to use Codex CLI with Arm-specific development assistance.
You're now ready to use Codex CLI with the Arm MCP server for Arm-specific development assistance.
14 changes: 13 additions & 1 deletion content/install-guides/gemini.md
@@ -407,6 +407,18 @@ Configured MCP servers:
- sysreport_instructions
```

### Use Arm prompt files with the MCP Server

The Arm MCP Server provides a rich set of tools and a knowledge base, but to make the best use of it, pair it with Arm-specific prompt files. These prompt files supply task-oriented context, best practices, and structured workflows that guide the agent in using MCP tools more effectively across common Arm development tasks.

#### Get the prompt files

Browse the [agent integrations directory](https://github.com/arm/mcp/tree/main/agent-integrations/gemini) to find prompt files for specific use cases:

- **Arm migration** ([arm-migration.toml](https://github.com/arm/mcp/blob/main/agent-integrations/gemini/arm-migration.toml)): Helps the agent systematically migrate applications from x86 to Arm, including dependency analysis, compatibility checks, and optimization recommendations.

Each prompt file is a TOML configuration that you can reference in your Gemini CLI sessions to enable more targeted, task-specific assistance.

If you're facing issues or have questions, reach out to mcpserver@arm.com.

You're now ready to use Gemini CLI with the Arm MCP server for Arm-specific development assistance.
16 changes: 15 additions & 1 deletion content/install-guides/github-copilot.md
@@ -335,6 +335,20 @@ Example prompts that use the Arm MCP Server:
- `Search the Arm knowledge base for Neon intrinsics examples`
- `Find learning resources about migrating from x86 to Arm`

## Use Arm prompt files with the MCP Server

The Arm MCP Server provides a rich set of tools and a knowledge base, but to make the best use of it, pair it with Arm-specific prompt files. These prompt files supply task-oriented context, best practices, and structured workflows that guide the agent in using MCP tools more effectively across common Arm development tasks.

### Get the prompt files

Browse the [agent integrations directory for Visual Studio Code](https://github.com/arm/mcp/tree/main/agent-integrations/vs-code) to find prompt files for specific use cases:

- **Arm migration** ([arm-migration.prompt.md](https://github.com/arm/mcp/blob/main/agent-integrations/vs-code/arm-migration.prompt.md)): Helps the agent systematically migrate applications from x86 to Arm, including dependency analysis, compatibility checks, and optimization recommendations.

Each prompt file is a Markdown configuration that you can reference in your GitHub Copilot sessions to enable more targeted, task-specific assistance.

If you're facing issues or have questions, reach out to mcpserver@arm.com.

## Troubleshooting MCP Server connections

This section helps you resolve common issues when installing and using GitHub Copilot with the Arm MCP Server on Arm systems. If you encounter problems not covered here, contact [mcpserver@arm.com](mailto:mcpserver@arm.com) for support.
@@ -349,4 +363,4 @@ If the Arm MCP Server doesn't connect:



You're now ready to use GitHub Copilot with the Arm MCP Server to enhance your Arm development workflow!
You're now ready to use GitHub Copilot with the Arm MCP server for Arm-specific development assistance.
16 changes: 14 additions & 2 deletions content/install-guides/kiro-cli.md
@@ -262,6 +262,18 @@ Use the `/tools` command to list the available tools:

You should see the Arm MCP server tools listed in the output. If the arm-mcp server says it's still loading, wait a moment and run `/tools` again.

If you are facing issues or have questions, reach out to mcpserver@arm.com.
### Use Arm prompt files with the MCP Server

You're ready to use Kiro CLI.
The Arm MCP Server provides a rich set of tools and a knowledge base, but to make the best use of it, pair it with Arm-specific prompt files. These prompt files supply task-oriented context, best practices, and structured workflows that guide the agent in using MCP tools more effectively across common Arm development tasks.

#### Get the prompt files

Browse the [agent integrations directory for Kiro](https://github.com/arm/mcp/tree/main/agent-integrations/kiro) to find prompt files for specific use cases:

- **Arm migration** ([arm-migration.md](https://github.com/arm/mcp/blob/main/agent-integrations/kiro/arm-migration.md)): Helps the agent systematically migrate applications from x86 to Arm, including dependency analysis, compatibility checks, and optimization recommendations.

Each prompt file is a Markdown configuration that you can reference in your Kiro CLI sessions to enable more targeted, task-specific assistance.

If you're facing issues or have questions, reach out to mcpserver@arm.com.

You're now ready to use Kiro CLI with the Arm MCP server for Arm-specific development assistance.
2 changes: 1 addition & 1 deletion content/install-guides/multipass.md
@@ -24,7 +24,7 @@ ecosystem_dashboard: https://developer.arm.com/ecosystem-dashboard/linux?package

test_images:
- ubuntu:latest
test_maintenance: true
test_maintenance: false

### PAGE SETUP
weight: 1 # Defines page ordering. Must be 1 for first (or only) page.
2 changes: 1 addition & 1 deletion content/install-guides/perf.md
@@ -22,7 +22,7 @@ ecosystem_dashboard: https://developer.arm.com/ecosystem-dashboard/linux?package

test_images:
- ubuntu:latest
test_maintenance: true
test_maintenance: false

### PAGE SETUP
weight: 1 # Defines page ordering. Must be 1 for first (or only) page.
2 changes: 1 addition & 1 deletion content/install-guides/pytorch.md
@@ -14,7 +14,7 @@ ecosystem_dashboard: https://developer.arm.com/ecosystem-dashboard/linux?package
test_images:
- ubuntu:latest
test_link: null
test_maintenance: true
test_maintenance: false
title: PyTorch
tool_install: true
weight: 1
@@ -1,31 +1,31 @@
---
title: Run image classification on an Alif Ensemble E8 DevKit with ExecuTorch and Ethos-U85
title: Run image classification on an Alif Ensemble E8 DevKit using ExecuTorch and Ethos-U85

description: Deploy a MobileNetV2 image classification model to an Alif Ensemble E8 DevKit and run inference on the Ethos-U85 NPU.

draft: true
cascade:
draft: true

minutes_to_complete: 120

who_is_this_for: This Learning Path is for embedded developers who want to deploy a neural network on an Arm Cortex-M55 microcontroller with an Ethos-U85 NPU. You will compile a MobileNetV2 model using ExecuTorch, embed it into bare-metal firmware, and run image classification on the Alif Ensemble E8 DevKit.
who_is_this_for: This is an advanced topic for embedded developers who want to deploy a neural network model to an Arm Cortex-M55 microcontroller using ExecuTorch and an Ethos-U85 NPU.

learning_objectives:
- Compile a MobileNetV2 model for the Ethos-U85 NPU using ExecuTorch's ahead-of-time (AOT) compiler on an Arm-based cloud instance.
- Build ExecuTorch static libraries for bare-metal Cortex-M55 targets.
- Configure CMSIS project files, memory layout, and linker scripts for a large ML workload on the Alif Ensemble E8.
- Run real-time image classification inference on the Ethos-U85 NPU and verify results through SEGGER RTT.
- Configure CMSIS project files, memory layout, and linker scripts for an ML workload on the Alif Ensemble E8.
- Run real-time image classification inference on the Ethos-U85 NPU and verify results using SEGGER Real-Time Transfer (RTT).

prerequisites:
- An Alif Ensemble E8 DevKit with a USB-C cable.
- A SEGGER J-Link debug probe (the DevKit has one built in).
- A development machine running macOS (Apple Silicon) or Linux.
- (Optional) An AWS account or access to an Arm-based cloud instance (Graviton c7g.4xlarge recommended). You can also build ExecuTorch locally on an Arm-based machine, though the steps will differ.
- Basic familiarity with C/C++ and embedded development concepts.
- VS Code installed on your development machine.
- Experience with C/C++ and embedded development concepts.
- An [Alif Ensemble E8 DevKit](https://alifsemi.com/support/kits/ensemble-e8devkit/) with a USB-C cable.
- A SEGGER J-Link debug probe (included in the DevKit).
- A development machine running macOS on Apple Silicon with Visual Studio Code installed.
- An AWS account or access to an Arm-based cloud instance for native Arm compilation.

author: Gabriel Peterson

### Tags
skilllevels: Advanced
subjects: ML
armips:

This file was deleted.

@@ -5,7 +5,7 @@ weight: 5
layout: "learningpathall"
---

## Overview
## What the application code does

The application code initializes the Ethos-U85 NPU, loads the MobileNetV2 model through ExecuTorch, runs inference on an embedded test image, and prints the classification result over SEGGER RTT.

@@ -16,7 +16,7 @@ Rather than building this code line by line, you download the complete `main.cpp`
Download the working `main.cpp` from the workshop repository and place it in your project:

```bash
cd ~/repo/alif/alif_vscode-template/mv2_runner
cd ~/alif/alif_vscode-template/mv2_runner
curl -L -o main.cpp \
https://raw.githubusercontent.com/ArmDeveloperEcosystem/workshop-ethos-u/main/main.cpp
```
@@ -25,7 +25,7 @@ curl -L -o main.cpp \
If you prefer, you can clone the full repository with `git clone https://github.com/ArmDeveloperEcosystem/workshop-ethos-u.git` and copy `main.cpp` from there.
{{% /notice %}}

The following sections explain what the code does. You don't need to modify anything; the downloaded file is ready to build.
The following sections explain what the code does. The downloaded file is ready to build as-is.

## Fault handlers

@@ -124,19 +124,12 @@ The method allocator holds the loaded model graph. The temp allocator provides s

## The inference pipeline

The `run_inference()` function follows a 10-step pipeline:
The `run_inference()` function handles the full pipeline from model loading to output. It starts by initializing the ExecuTorch runtime and creating a zero-copy data loader that reads the compiled `.pte` model directly from flash memory. The program is then parsed, and the method metadata is queried to determine how much planned memory the model needs.

1. **Initialize** the ExecuTorch runtime.
2. **Create a data loader** that reads the model directly from flash memory (zero-copy).
3. **Load the program** (parse the `.pte` flatbuffer).
4. **Query method metadata** to find out how many planned buffers the model needs and how large they are.
5. **Set up planned memory** by carving sub-allocations from the SRAM1 pool.
6. **Create the memory manager** that ties together the method, temp, and planned allocators.
7. **Load the method** (the `forward` function of the model).
8. **Prepare the input tensor**: convert the embedded int8 image data to float32 (the model's first operator is `quantize_per_tensor`, which expects float input).
9. **Execute inference**: the quantize op runs on the CPU, the entire MobileNetV2 backbone runs as a single NPU command stream on the Ethos-U85, and the dequantize op runs back on the CPU.
10. **Read the output**: find the argmax of the 1000-class output vector to get the predicted ImageNet class.
Memory is set up next: sub-allocations are carved from the SRAM1 pool for planned buffers, and a memory manager ties together the method, temp, and planned allocators. Once memory is in place, the `forward` method is loaded.

The NPU handles the bulk of the computation. The CPU-side overhead (ExecuTorch loading, input conversion, quantize/dequantize) is small compared to the NPU workload.
Before inference runs, the input tensor is prepared by converting the embedded int8 image data to float32. This is needed because the model's first operator is `quantize_per_tensor`, which expects float input. Inference then runs in three stages: the quantize operator executes on the CPU, the entire MobileNetV2 backbone runs as a single NPU command stream on the Ethos-U85, and the dequantize operator runs on the CPU again. Finally, the argmax of the 1000-class output vector gives the predicted ImageNet class.

You now have the application code in place. The next section configures the memory layout to accommodate the model and ExecuTorch runtime.
The NPU handles the bulk of the computation. The CPU-side overhead of ExecuTorch loading, input conversion, and quantize/dequantize is small compared to the NPU workload.

The application code is in place. The next section configures the memory layout to accommodate the model and ExecuTorch runtime.
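The CPU-side steps that bracket the NPU run can be illustrated off-target. A hedged sketch in plain Python with toy values — on the device these steps are C++ inside `run_inference()`, and the function names below are illustrative, not taken from the firmware:

```python
# Illustrative host-side model of the two CPU steps around NPU inference:
# widening int8 pixels to float32 for quantize_per_tensor, and taking the
# argmax of the output vector to get the predicted class. Toy data only.

def prepare_input(int8_pixels):
    # quantize_per_tensor expects float input, so int8 data is widened first
    return [float(p) for p in int8_pixels]

def predicted_class(logits):
    # predicted ImageNet class = index of the largest output value
    return max(range(len(logits)), key=lambda i: logits[i])

inputs = prepare_input([-128, 0, 127])
logits = [0.1, 2.5, 0.3, 1.7]   # stands in for the 1000-class output vector
print(inputs)                   # [-128.0, 0.0, 127.0]
print(predicted_class(logits))  # 1
```

On target, the real output vector has 1000 entries and the conversion loop runs over the full embedded image, but the logic is the same.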