Heterogeneous Builds — Zephyr + Yocto on the same SoM
Walkthrough for a dual-app project on E1M-V2N101: Yocto Linux on the four Cortex-A55 cores plus Zephyr on the Cortex-M33 system-manager, the two halves talking over RPMsg. You'll declare both halves in a single `board.yaml`, let `west alp-build` fan out into per-core slices, and end up with a flashable bundle that covers Linux + Zephyr + the on-module GD32 helper MCU.
The same pattern generalises to E1M-AEN E5..E8 (A32 + M55-HP + M55-HE), E1M-N93 (A55 + M33), and any future heterogeneous SoM.
Single-OS SoM (e.g. AEN E3/E4 with M55 cores only)? Follow Quick start instead. The orchestrator handles single-slice fan-outs too, but you don't need the cross-core machinery this guide focuses on.
1. What you'll have at the end
- A V2N project that boots Yocto Linux on the A55 cluster.
- A Zephyr image running on the M33-SM that the kernel brings up via remoteproc on first boot.
- A two-way RPMsg channel between the two halves, accessed through `<alp/rpc.h>`.
- A `system-manifest.yaml` that feeds `west alp-image`, `west alp-flash`, and OTA.
Out of scope: writing Yocto recipes from scratch (Yocto docs); writing Zephyr drivers from scratch (Zephyr docs); the wire-level RPMsg protocol details (OpenAMP docs).
2. Prerequisites
- West workspace bootstrapped — `bash scripts/bootstrap.sh` from the SDK root.
- Zephyr SDK 1.0.1 installed (`ZEPHYR_SDK_INSTALL_DIR` exported) — only for the Zephyr slice's real-silicon target. Not required for `native_sim/native/64` smoke builds, which use host gcc with `ZEPHYR_TOOLCHAIN_VARIANT=host`. CI's `pr-twister` runs container-less on `ubuntu-latest` with the same host-gcc setting.
- Yocto build host set up (50+ GB free, Poky host packages).
- Plan for ~30 GB of `build/<core>-yocto/tmp/` on the first cold build. Subsequent builds reuse `sstate-cache` and stay small.
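If you want repeat cold builds to stay cheap across several projects, the Yocto slice can share caches via standard bitbake variables. A sketch, assuming the orchestrator's generated `local.conf` honours pre-set values (the paths are illustrative):

```conf
# site.conf or conf/local.conf — standard bitbake cache variables
SSTATE_DIR ?= "/srv/yocto/sstate-cache"
DL_DIR     ?= "/srv/yocto/downloads"
```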
3. Project layout
A dual-app project keeps each half in its own sub-directory. Sub-directory names match the `cores:` keys in `board.yaml` exactly — the orchestrator uses them to route generated config and find source trees.
```
examples/rpmsg-v2n/
├── board.yaml            (v2; declares a55_cluster + m33_sm)
├── README.md
├── linux/                (a55_cluster's app)
│   ├── CMakeLists.txt
│   └── src/main.c        (consumer using <alp/rpc.h>)
└── m33_sm/               (m33_sm's app)
    ├── CMakeLists.txt
    ├── prj.conf
    └── src/main.c        (producer using <alp/rpc.h>)
```
`linux/` and `m33_sm/` are conventions, not magic — the `cores.<id>.app:` path in `board.yaml` binds them. Matching the core ID keeps the layout easy to read.
Single-OS examples don't change shape: they keep their flat `src/` layout and declare a single core in `board.yaml`. The sub-directory split is opt-in per project.
4. The `cores:` block, walked through
```yaml
schema_version: 2

som:
  sku: E1M-V2N101
  hw_rev: r1

carrier:
  name: E1M-X-EVK

cores:
  a55_cluster:
    os: yocto
    app: ./linux
    image: alp-image-edge
    peripherals: [ethernet, usb, emmc]
    libraries: [mbedtls, nlohmann_json]
    iot: { wifi: true, mqtt: true }

  m33_sm:
    os: zephyr
    app: ./m33_sm
    peripherals: [adc, pwm, i2c, gpio]
    libraries: [cmsis_dsp]
    inference: { backend: cpu }

ipc:
  - kind: rpmsg
    endpoints: [a55_cluster, m33_sm]
    carve_out_kb: 512
    name: alp_default_rpmsg

diagnostics:
  log_level: info
```
`os` values per core:
| Value | Meaning |
|---|---|
| `yocto` | A-class core(s) running Linux from a bitbake rootfs. |
| `zephyr` | M-class core running Zephyr. |
| `baremetal` | M-class core running a bare-metal CMake app (no RTOS). |
| `off` | Core present in silicon but intentionally not used. |
`off` is a first-class state — no implicit "did we forget a core?" failure mode. The recommended pattern on AEN E5..E8 is to declare every on-die core explicitly so the project's intent is self-documenting:
```yaml
cores:
  a32_cluster: { os: yocto, app: ./linux, image: alp-image-edge }
  m55_hp:      { os: zephyr, app: ./m55_hp, peripherals: [i2c] }
  m55_he:      { os: off }   # peer core present, unused here
```
The remaining per-core fields (`peripherals`, `libraries`, `iot`, `inference`) are scoped to that slice. The M33-SM doesn't carry networking on V2N, so `iot:` only appears under `a55_cluster`. The inference backend (`cpu` / `npu` / `gpu`) is also per-slice.
5. The `ipc:` block
Each entry declares one cross-core channel.
```yaml
ipc:
  - kind: rpmsg
    endpoints: [a55_cluster, m33_sm]
    carve_out_kb: 512
    name: alp_default_rpmsg
```
- `kind: rpmsg` — the only supported value in v0.6. Future kinds (raw shmem, virtio-net) are reserved.
- `endpoints` — the cores sharing this channel. Both must have `os: != off`. Exactly two; RPMsg is point-to-point.
- `carve_out_kb` — shared-memory region size in kibibytes. The orchestrator allocates it from the SoM preset's `memory_map:`, preferring non-cacheable memory on SoMs whose M-class core has no cache (V2N), and cacheable memory plus auto-generated cache maintenance on SoMs that do (AEN).
- `name` — stable identifier. It becomes the resource-table label on OpenAMP, the Linux DT `reserved-memory` node label, and the `#define` prefix in the generated header. Stick to `[a-z][a-z0-9_]+`.
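On the Linux side, the channel's `name:` and carve-out surface as a `reserved-memory` node. A sketch of what the generated node might look like for the channel above (the node name and exact property set are illustrative; the real file is generated, not hand-written):

```dts
/* Illustrative only — mirrors the alp_default_rpmsg channel declared above. */
/ {
    reserved-memory {
        #address-cells = <1>;
        #size-cells = <1>;
        ranges;

        alp_default_rpmsg: vdev0buffer@10078000 {
            compatible = "shared-dma-pool";
            reg = <0x10078000 0x80000>;   /* 512 KiB carve-out */
            no-map;                       /* non-cacheable on V2N */
        };
    };
};
```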
For each `ipc:` entry, `west alp-build` emits a header that both halves `#include`:
```c
/* build/generated/alp/system_ipc.h — auto-generated, do not edit.
   The channel `name:` is upper-cased and prepended with ALP_IPC_,
   so `name: alp_default_rpmsg` yields the ALP_IPC_ALP_DEFAULT_RPMSG_*
   macro stem (note the doubled `ALP_`). */
#define ALP_IPC_ALP_DEFAULT_RPMSG_NAME    "alp_default_rpmsg"
#define ALP_IPC_ALP_DEFAULT_RPMSG_ADDR    0x10078000u
#define ALP_IPC_ALP_DEFAULT_RPMSG_SIZE    0x00080000u
#define ALP_IPC_ALP_DEFAULT_RPMSG_SRC_EPT 0x00000401u
#define ALP_IPC_ALP_DEFAULT_RPMSG_DST_EPT 0x00000402u
#define ALP_IPC_ALP_DEFAULT_RPMSG_MBOX_CH 0u
```
Both `linux/src/main.c` and `m33_sm/src/main.c` `#include <alp/system_ipc.h>` and use the same constants. Endpoint IDs are derived deterministically from `name` — re-running the build produces byte-identical headers. Drift between the Linux DT and the Zephyr overlay becomes impossible.
6. Building
```
west alp-build examples/rpmsg-v2n
```
The orchestrator:
- Loads and validates `board.yaml` against the v2 schema.
- Resolves the SoM preset → topology defaults → effective per-core mapping.
- For each core with `os: != off`, materialises per-core config (`build/m33_sm-zephyr/alp.conf`, `build/a55_cluster-yocto/conf/local.conf`).
- Emits shared generated artefacts (`alp/system_ipc.h`, `dts-reservations.dtsi`).
- Registers helper-MCU artefacts (GD32, CC3501E).
- Dispatches slice builds in parallel.
- Writes `build/system-manifest.yaml`, joining everything together.
Output layout:
```
build/
├── a55_cluster-yocto/
│   ├── conf/local.conf
│   └── tmp/deploy/images/e1m-v2n101-a55/{rootfs.wic.gz, Image, *.dtb}
├── m33_sm-zephyr/
│   └── zephyr/zephyr.elf
├── helper-gd32/
│   └── gd32_bridge.bin
├── helper-cc3501e/
│   └── cc3501e_otp.blob
├── generated/
│   ├── alp/system_ipc.h
│   ├── dts-reservations.dtsi
│   └── alp_hw_info_build.h
└── system-manifest.yaml
```
Iterating on one slice
The Yocto cold build takes hours; the Zephyr build takes seconds. When iterating on the M-side firmware, rebuild only that slice:
```
west alp-build examples/rpmsg-v2n --core m33_sm
```
The orchestrator skips the Yocto fan-out, re-uses the previous manifest, and rebuilds only `build/m33_sm-zephyr/`. Slice failures don't cascade — `system-manifest.yaml` carries a per-slice `status: ok | failed`; re-running re-attempts only the failed slices.
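For orientation, a trimmed manifest might look like the sketch below. Only the fields named in this guide (`status:`, `log_path:`, `boot_order:`, `ipc[].rpmsg_endpoint_ids`) are grounded; the surrounding shape is illustrative:

```yaml
# Illustrative shape only — the real file is written by west alp-build.
slices:
  a55_cluster-yocto:
    status: ok
    log_path: build/a55_cluster-yocto/log/bitbake.log
  m33_sm-zephyr:
    status: failed          # re-running re-attempts only this slice
    log_path: build/m33_sm-zephyr/build.log
ipc:
  - name: alp_default_rpmsg
    rpmsg_endpoint_ids: [0x401, 0x402]
boot_order: [a55_cluster-yocto, m33_sm-zephyr, helper-gd32]
```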
7. Flashing
```
west alp-image   # → build/image-bundle/alp-system.zip + .swu (Mender)
west alp-flash   # programs attached hardware
```
`alp-image` consumes `system-manifest.yaml` and assembles a single flashable bundle:
- The Yocto `.wic.gz` rootfs.
- The Zephyr `.elf` (installed into the rootfs at `/lib/firmware/alp/E1M-V2N101/m33_sm.elf` so remoteproc picks it up on first boot).
- Helper-MCU firmware (`gd32_bridge.bin`, `cc3501e_otp.blob`).
- A Mender `.swu` for OTA.
`alp-flash` walks the manifest's `boot_order:` and programs each piece with the right backend tool (vendor flasher for the SoC, openocd over SWD for the GD32 helper, the USB-CDC bootloader for the CC3501E). You don't pick the tool.
8. Debugging
Per-slice logs
Each slice gets its own log directory under `build/<core>-<os>/`:
- `build/m33_sm-zephyr/build.log` — Zephyr CMake + ninja output.
- `build/a55_cluster-yocto/log/bitbake.log` — bitbake task output.
- `build/helper-gd32/build.log` — GD32 firmware build.
`system-manifest.yaml` carries each slice's `log_path:`, so tooling jumps straight to the right log on a failure.
Attaching a debugger
- A55 cluster (Linux): `alp-image-edge` ships with `gdbserver`. SSH in, attach to your process.
- M33-SM (Zephyr): SWD via openocd or J-Link. The orchestrator installs `build/m33_sm-zephyr/openocd.cfg`; `west debug --build-dir build/m33_sm-zephyr` attaches a GDB session.
- Cross-core sanity check: print your endpoint IDs on both sides with `printk("ept=%u\n", ALP_IPC_ALP_DEFAULT_RPMSG_SRC_EPT)` — they must match `system-manifest.yaml`'s `ipc[].rpmsg_endpoint_ids` field.
Renode smoke test
No board needed to verify the heterogeneous handshake:
```
west alp-renode
```
Renode loads both slice images, simulates RPMsg over its mailbox peripheral, and runs a name-service ping/pong. CI uses the same command in `pr-renode-dual-os.yml`.
CI status: `pr-alp-build`, `pr-bitbake`, and `pr-renode-dual-os` are advisory until the v0.7 self-hosted toolchain runners (Zephyr SDK, bitbake, Renode) land. The manifest-shape and determinism gates run on `ubuntu-latest` and block merges; slice-build failures don't.
9. Cross-core API
`<alp/rpc.h>` is the customer-facing IPC API. It sits on OpenAMP and uses the generated endpoint constants — apps don't type addresses, endpoint IDs, or mailbox channels by hand.
Producer (M33-SM)
```c
/* m33_sm/src/main.c */
#include <alp/rpc.h>
#include <alp/system_ipc.h>   /* generated by west alp-build */
#include <zephyr/kernel.h>

float read_thermistor(void);  /* app-specific sensor read */

int main(void) {
    alp_rpc_channel_t *ch = alp_rpc_open(&(alp_rpc_config_t){
        .name    = ALP_IPC_ALP_DEFAULT_RPMSG_NAME,
        .src_ept = ALP_IPC_ALP_DEFAULT_RPMSG_SRC_EPT,
        .dst_ept = ALP_IPC_ALP_DEFAULT_RPMSG_DST_EPT,
    });
    if (ch == NULL) {
        return -1;  /* alp_last_error() reports why */
    }
    while (1) {
        float temperature_c = read_thermistor();
        alp_rpc_call(ch, "temperature",
                     &temperature_c, sizeof(temperature_c));
        k_msleep(1000);
    }
}
```
Consumer (A55)
```c
/* linux/src/main.c */
#include <alp/rpc.h>
#include <alp/system_ipc.h>
#include <stdio.h>
#include <unistd.h>

static void on_temperature(const void *buf, size_t len, void *user) {
    (void)user;
    if (len == sizeof(float)) {
        printf("[a55] temperature=%.2f C\n", *(const float *)buf);
    }
}

int main(void) {
    alp_rpc_channel_t *ch = alp_rpc_open(&(alp_rpc_config_t){
        .name    = ALP_IPC_ALP_DEFAULT_RPMSG_NAME,
        .src_ept = ALP_IPC_ALP_DEFAULT_RPMSG_DST_EPT,  /* swap src/dst */
        .dst_ept = ALP_IPC_ALP_DEFAULT_RPMSG_SRC_EPT,
    });
    if (ch == NULL) {
        return -1;
    }
    alp_rpc_subscribe(ch, "temperature", on_temperature, NULL);
    for (;;) pause();
}
```
Both sides `#include` the same generated header, so endpoint IDs match by construction. The producer's `src_ept` is the consumer's `dst_ept` and vice versa — that symmetry is the only piece a developer has to keep straight. For multiple channels, declare multiple `ipc:` entries with distinct `name:` values.
10. Common pitfalls
**Forgetting to declare `ipc:`.** Call `alp_rpc_open()` with a name that doesn't appear in any `ipc:` block and you won't compile — `<alp/system_ipc.h>` doesn't carry the matching constants. Every cross-core touchpoint is declared at build time, not discovered at runtime.
**Cache coherency on AEN.** V2N's default carve-out is non-cacheable because the M33-SM has no data cache. AEN's M55 cores do have a cache, so the default flips to cacheable with auto-generated cache-maintenance points in `alp_rpc_*`. Don't write cache ops by hand.
**Boot ordering.** Linux brings the M33 up via remoteproc; the M33 can't talk to the A55 until userspace writes `start` to `/sys/class/remoteproc/.../state`. App code should retry `alp_rpc_open()` with backoff — or use `alp_rpc_open_blocking()`, which loops until the peer answers.
See also
- `<alp/rpc.h>` — full RPC API reference
- `board.yaml` reference — schema v2
- `rpmsg-v2n` · `rpmsg-aen` · `rpmsg-imx93` · `heterogeneous-offload` — flagship examples
- ADR 0010 — design rationale