
rpmsg-v2n

[UNTESTED] — v0.6 paper-correct

Both halves are declared in one board.yaml (schema v2); the orchestrator fans out the Yocto and Zephyr builds in parallel. Hardware-in-the-loop (HiL) bring-up on the V2N with Yocto is gated on v0.8.

Heterogeneous-compute flagship: Yocto Linux on the V2N's Cortex-A55 cluster, Zephyr RTOS on the same V2N's Cortex-M33 system-manager, talking over a framed RPMsg channel. One SoM, real-time plus Linux, one declarative source of truth.

Source: examples/rpmsg-v2n/.

Layout

examples/rpmsg-v2n/
├── board.yaml          (v2; declares a55_cluster + m33_sm + ipc)
├── README.md
├── linux/              (a55_cluster's Yocto slice)
│   ├── CMakeLists.txt
│   └── src/main.c      (consumer using <alp/rpc.h>)
└── m33_sm/             (m33_sm's Zephyr slice)
    ├── CMakeLists.txt
    ├── prj.conf
    └── src/main.c      (producer using <alp/rpc.h>)

What changed vs v0.5

Prior to v0.6, the dual-OS framing lived in two places that had to be kept in sync by hand: the directory's board.yaml covered the Zephyr/M33 half, while the Yocto/A55 half hid behind a separate BitBake recipe that didn't consume the same config. v0.6's orchestrator (scripts/alp_orchestrate.py) reads one board.yaml, fans out per-core slices, and emits a system manifest that the image-bundle, flash, and OTA tooling consume.

What it shows

| Side | Build | Role |
| --- | --- | --- |
| M33-SM / Zephyr | m33_sm/src/main.c | Producer — reads a sensor via <alp/peripheral.h> and publishes one temperature event/sec through <alp/rpc.h>. |
| A55 cluster / Yocto | linux/src/main.c | Consumer — opens the matching RPC channel, subscribes to temperature, prints each sample to stdout. |

Both binaries #include <alp/rpc.h> and <alp/system_ipc.h>. The latter is auto-emitted by the orchestrator from the project's ipc: block; both slices share identical endpoint IDs and the carve-out address by construction.

board.yaml

schema_version: 2

som:
  sku: E1M-V2N101
  hw_rev: r1

carrier:
  name: E1M-X-EVK

cores:
  a55_cluster:
    os: yocto
    app: ./linux
    image: alp-image-edge
    peripherals: [ethernet, usb, emmc]
    libraries: [mbedtls, nlohmann_json]
    iot: { wifi: true, mqtt: true, tls: true }
  m33_sm:
    os: zephyr
    app: ./m33_sm
    peripherals: [adc, pwm, i2c, gpio]
    libraries: [cmsis_dsp]
    inference: { backend: cpu }

ipc:
  - kind: rpmsg
    endpoints: [a55_cluster, m33_sm]
    carve_out_kb: 512
    name: alp_default_rpmsg

diagnostics:
  log_level: info

Memory map

The orchestrator resolves the alp_default_rpmsg carve-out deterministically from E1M-V2N101's memory_map: block. The default non-cacheable region is ocram_low (512 KiB at 0x00010000).

| Range | Owner | Notes |
| --- | --- | --- |
| 0x48000000 + … | A55 (DDR) | Linux kernel + rootfs (LPDDR4X main memory). |
| 0x00010000 – 0x00090000 | IPC | alp_default_rpmsg — ocram_low, non-cacheable. |
| 0x80000000 + … | M33-SM | M33 TCM (Zephyr image + .data + .bss). |

The generated <alp/system_ipc.h> carries the resolved address + size + endpoint IDs; neither side hand-writes them.

Boot order

  1. A55 cluster reads U-Boot from xSPI, hands off to Linux.
  2. systemd reaches its basic target.
  3. The remoteproc driver loads /lib/firmware/alp/E1M-V2N101/m33_sm.elf into the M33-SM core and starts it.
  4. Both sides run the alp_default_rpmsg name-service handshake over OpenAMP; alp_rpc_open() returns on the A55 side first, then on the M33.

The M33 firmware lands in the rootfs via the orchestrator's bbappend to meta-alp-sdk.
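A bbappend that does this would plausibly look like the fragment below; the recipe it appends to and the variable choices are assumptions, while the install path mirrors the /lib/firmware/alp/E1M-V2N101/m33_sm.elf location remoteproc loads from:

```bitbake
# Hypothetical shape of the orchestrator-generated bbappend in meta-alp-sdk.
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
SRC_URI += "file://m33_sm.elf"

do_install:append() {
    install -d ${D}${nonarch_base_libdir}/firmware/alp/E1M-V2N101
    install -m 0644 ${WORKDIR}/m33_sm.elf \
        ${D}${nonarch_base_libdir}/firmware/alp/E1M-V2N101/
}

FILES:${PN} += "${nonarch_base_libdir}/firmware/alp/E1M-V2N101/m33_sm.elf"
```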

Build

cd alp-workspace
west alp-build alp-sdk/examples/rpmsg-v2n

That single command:

  1. Reads board.yaml, resolves the V2N101 preset's topology.
  2. Fans out two slices in parallel:
    • build/a55_cluster-yocto/ (bitbake against MACHINE = e1m-v2n101-a55)
    • build/m33_sm-zephyr/ (Zephyr against BOARD = alp_e1m_v2n101_m33_sm)
  3. Emits build/generated/alp/system_ipc.h + build/generated/dts-reservations.dtsi — the shared IPC contract.
  4. Writes build/system-manifest.yaml.
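The manifest might take a shape like the sketch below; key names are assumptions, but every value is one documented on this page:

```yaml
# Hypothetical build/system-manifest.yaml
som: { sku: E1M-V2N101, hw_rev: r1 }
slices:
  a55_cluster: { os: yocto,  build_dir: build/a55_cluster-yocto/ }
  m33_sm:      { os: zephyr, build_dir: build/m33_sm-zephyr/ }
ipc:
  - name: alp_default_rpmsg
    kind: rpmsg
    carve_out: { base: 0x00010000, size_kb: 512 }
boot_order: [a55_cluster, m33_sm]
```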

Iterate on just the M33 slice (skip the hour-long Yocto rebuild):

west alp-build alp-sdk/examples/rpmsg-v2n --core m33_sm

Image + flash:

west alp-image     # → build/image-bundle/alp-system.zip + .swu
west alp-flash     # walks boot_order: from the manifest
