Architecture
The ALP SDK is a thin unification layer on top of three OS backends and four (today) vendor SDKs. This page explains the layering, the boundaries, and how a single board.yaml flows through the build.
Stack overview
┌──────────────────────────────────────────────────────────────────┐
│ Application code │
│ (yours, or alp-studio codegen) │
└──────────────────────────────────────────────────────────────────┘
▼
┌──────────────────────────────────────────────────────────────────┐
│ ALP SDK <alp/*.h> │
│ ─ public surface (handles, error codes, instance IDs) │
│ ─ <alp/soc_caps.h> (generated per SoC) │
│ ─ <alp/e1m_pinout.h> (portable instance IDs) │
│ ─ <alp/chips/*> (20+ opt-in chip drivers) │
└──────────────────────────────────────────────────────────────────┘
▼
┌──────────────────────────────────────────────────────────────────┐
│ Backend implementations src/zephyr/, src/baremetal/, src/yocto/│
│ ─ one .c per public class, per OS │
│ ─ thin wrappers calling the OS's driver model │
└──────────────────────────────────────────────────────────────────┘
▼
┌──────────────────────────────────────────────────────────────────┐
│ OS layer │
│ ─ Zephyr device drivers / Linux ioctls / Alif HAL │
└──────────────────────────────────────────────────────────────────┘
▼
┌──────────────────────────────────────────────────────────────────┐
│ Vendor HAL vendors/<vendor>/ │
│ ─ Alif Ensemble · Renesas RZ/V2N · NXP i.MX 93 · DEEPX DX-M1 │
└──────────────────────────────────────────────────────────────────┘
▼
E1M / E1M-X hardware
Why a wrapper, not a re-implementation
ADR 0001 records the decision: the SDK wraps the OS's driver model rather than reimplementing peripheral access. On Zephyr that means alp_i2c_* forwards to Zephyr's i2c_* API; on Yocto it forwards to i2c-dev ioctls; on bare-metal it calls the vendor HAL directly.
The win: the SDK gets every OS's driver fix, security patch, and silicon enablement for free. The cost: the public API can never assume features the underlying OS doesn't expose — the wrapper layer must stay shallow.
Yocto layer
The Yocto backend ships as the meta-alp-sdk Yocto layer at the top of the SDK repo. The layer carries:
- per-cluster MACHINE confs (e1m-v2n101-a55, e1m-v2m101-a55, e1m-nx9101-a55, e1m-aen501-a32, …)
- the per-SKU recipes for ALP chip drivers + libraries
- the userspace remoteproc helper (alp-remoteproc_0.6.bb)
- the generated reserved-memory: fragment (alp-dts-reservations_0.6.bb)
- userspace bindings for <alp/storage.h> / <alp/iot.h> / <alp/inference.h> / <alp/rpc.h>
Per-cluster MACHINE naming (not per-SoM) became normative in v0.6 — the M-class peer on a V2N is no longer implied to be part of the same MACHINE as the A55 cluster.
Add BBLAYERS += "${TOPDIR}/../alp-sdk/meta-alp-sdk" to bblayers.conf and pick the right MACHINE; the orchestrator emits the supporting IMAGE_INSTALL deltas from your board.yaml automatically.
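Spelled out, and assuming the SDK checkout sits next to your build directory (paths here are illustrative; the MACHINE is one of the cluster names listed above):

```conf
# conf/bblayers.conf — adjust the path to wherever alp-sdk is checked out
BBLAYERS += "${TOPDIR}/../alp-sdk/meta-alp-sdk"

# conf/local.conf — pick the MACHINE matching your cluster
MACHINE = "e1m-v2n101-a55"
```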
Migration note. The old yocto/meta-alp/ tree was deleted in v0.6 — its content was absorbed into meta-alp-sdk/. Customers consuming the old layer name should re-point at meta-alp-sdk.
Vendor SDK / HAL integration
The SDK doesn't re-implement vendor HALs — it pulls them in as Zephyr modules pinned in west.yml:
- Alif Ensemble — upstream sdk-alif HAL via the AEN board files in alplabai/alp-zephyr-modules.
- Renesas RZ/V2N — Zephyr's hal_renesas already mirrors the RZ/V FSP for the N44 SoC; west update pulls it automatically.
- NXP i.MX 9352 — MCUXpresso via Zephyr's hal_nxp module (M33 core).
- DEEPX DX-M1 — userland from upstream github.com/DEEPX-AI/dx_rt; Yocto layer via github.com/DEEPX-AI/meta-deepx-m1.
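As a shape reference only — the project names match the modules named above, but the paths are guesses and the pinned revisions are deliberately left as placeholders, not the SDK's actual pins:

```yaml
# Illustrative west.yml fragment
manifest:
  projects:
    - name: hal_renesas          # mirrors the RZ/V FSP
      revision: <pinned-sha>
      path: modules/hal/renesas
    - name: hal_nxp              # MCUXpresso for the i.MX 93 M33
      revision: <pinned-sha>
      path: modules/hal/nxp
```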
The on-module TI CC3501E Wi-Fi/BT coprocessor on AEN is not a vendor SDK dependency — its firmware ships pre-built from alplabai/cc3501e-firmware and the orchestrator bundles it into the image like any other helper-MCU artefact. Host code talks to it through the documented inter-chip protocol; no TI SimpleLink SDK is built into the SDK.
Per-vendor integration status, license terms, and example pins live in docs/vendor-partnerships.md.
ABI stability annotations
Every public header carries an [ABI-STABLE] or [ABI-EXPERIMENTAL] marker on the file-level Doxygen block. Stable headers carry an ABI snapshot in the SDK repo and the pr-abi-snapshot.yml CI job blocks merges that change the binary surface without an explicit bump. Experimental headers (e.g. <alp/gpu2d.h>, the wave-2 <alp/dsp.h> chain types, <alp/power.h>'s system-mode setter) reserve the right to change shape before v1.0 — pin to a specific SDK commit if you depend on them.
The full annotation policy + machinery live in docs/abi-stability.md.
Error model
Every alp_*_open() returns a handle or NULL. Every operation returns an alp_err_t integer (0 = ALP_OK). When a handle is NULL, alp_last_error() — a thread-local — tells you why:
| Error | Meaning |
|---|---|
| ALP_ERR_NOSUPPORT | Peripheral / feature not implemented for this OS / SoM yet. |
| ALP_ERR_OUT_OF_RANGE | Requested config exceeds the SoC's documented caps. |
| ALP_ERR_NOT_READY | Chip not populated (ACK-probe failed) or rail not up. |
| ALP_ERR_TIMEOUT | Hardware did not respond within the configured window. |
| ALP_ERR_IO | Bus / link error (also: AEAD tag mismatch on decrypt). |
ADR 0002 covers the full contract. The point of alp_last_error() is that hand-written firmware can diagnose NULL returns without changing the API shape — no out-parameters, no global state on a successful path.
Capability validation
Three layers protect against asking for hardware features that don't exist:
- Build-time — the orchestrator cross-checks every entry in each cores.<id>.peripherals: against the SoC's metadata/socs/<vendor>/<family>/<part>.json capability profile scoped to that core. A board.yaml asking for i2s on a core that doesn't route I²S fails at west alp-build time with exit code 3, before any compile work. cores: keys are also cross-checked against the SoM preset's topology: block — a typo in a core ID fails the same gate.
- Capability-keyed library bindings — each library that depends on a hardware accelerator declares its priority list with a requires_cap: matcher (e.g. ethos_u85_count). The loader reads each SoM preset's capabilities: block and emits the matching CONFIG_* for the highest-priority capability the SKU actually has. The Ethos-U55 → U85 split on Alif Ensemble (E4 / E6 / E8 carry both; E3 / E5 / E7 carry only U55) is driven entirely by this mechanism — application code never names the NPU.
- Runtime — <alp/soc_caps.h> is generated from the same metadata/socs/ JSON and consulted by *_open(). E.g. alp_adc_open with resolution_bits = 16 on a 12-bit SoC returns NULL and stamps ALP_ERR_OUT_OF_RANGE.
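For instance, a board.yaml fragment shaped like the following (the core ID and peripheral names are made up for illustration) would trip the build-time gate when the named core doesn't route I²S:

```yaml
cores:
  m55_he:                # hypothetical core ID from the SoC's topology
    os: zephyr
    peripherals:
      - i2c0
      - i2s0             # not routed on this core -> west alp-build exits 3
```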
E1M portability bound
E1M_<CLASS>_COUNT macros in <alp/e1m_pinout.h> document the cross-SoM-portable instance count for each peripheral class. An app that opens only E1M_I2C0 and E1M_I2C1 (instance IDs below a count of 2) works on every conformant SoM. ADR 0004 records the rule.
If your app needs more instances than the portable bound, it stays correct on the SoMs that route them — it just isn't portable to ones that don't.
OS backend selection
board.yaml v2 declares the backend per on-die core under cores.<id>.os. The SDK ships three runtimes; one project can use all three simultaneously:
| Backend | Selected by | Implementation lives in | Comments |
|---|---|---|---|
| Zephyr | os: zephyr | src/zephyr/ | First-class on AEN M55 + V2N / iMX93 M-class peers. Zephyr v4.4.0 pinned. |
| Yocto | os: yocto | src/yocto/ | First-class on V2N / V2N-M1 / iMX93 / AEN-E5..E8 A-class clusters. Uses i2c-dev, spidev, GPIO chardev v2, ALSA, OpenSSL, libmosquitto. |
| Bare-metal | os: baremetal | src/baremetal/ | Calls vendor HAL directly. Coverage lands per the test plan. |
| (skip core) | os: off | — | Explicit "leave this core dark in this project". |
A bare som: { sku: <MPN> } with no cores: overrides inherits the SoM preset's topology: defaults — every heterogeneous SoM produces a working dual-image build out of the box.
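A per-core selection might look like this sketch — the SKU placeholder is deliberate and the core IDs are illustrative, not taken from a real SoM preset:

```yaml
# Hypothetical board.yaml v2
som:
  sku: <MPN>
cores:
  a55: { os: yocto }     # Linux cluster
  m33: { os: zephyr }    # RTOS peer
  dsp: { os: off }       # leave this core dark in this project
```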
Two consumer paths, same API
ADR 0005 records the SDK ↔ Studio boundary.
Hand-written
You write <alp/...> calls directly. Instance IDs come from <alp/e1m_pinout.h>. The build is west alp-build → west alp-image → west alp-flash (or west alp-build → west build for single-slice Zephyr projects).
alp-studio codegen
Studio reads its block library + your project document, runs a pin allocator against the active SoM, and emits C that calls the same <alp/...> API. Switching between paths is non-destructive — Studio's generated code coexists with hand-written code in the same project.
Orchestrator — scripts/alp_orchestrate.py
The orchestrator is the SDK's central tool in v0.6 — it replaced the single-slice alp_project.py loader when board.yaml schema v2 landed. Inputs:
- board.yaml at the app root (schema v2)
- metadata/e1m_modules/<MPN>.yaml (SoM preset — carries topology:, memory_map:, mailbox:, helper_firmware: blocks)
- metadata/carriers/<carrier>.yaml (carrier preset)
- metadata/socs/<vendor>/<family>/<part>.json (silicon capability profile; the cores[].id set is the topology key set)
- metadata/library-profiles/<lib>/ (compile-time tuning for libraries)
Generated artefacts (byte-stable across rebuilds):
| Path | What it carries |
|---|---|
| build/system-manifest.yaml | Per-slice status, log paths, artefact paths, boot order. |
| build/generated/alp/system_ipc.h | Endpoint IDs, addresses, mailbox channel macros — shared by all slices. |
| build/generated/dts-reservations.dtsi | reserved-memory: carve-outs shipped into Linux + Zephyr DTs. |
| build/<core>-zephyr/alp.conf | Kconfig fragment layered onto each Zephyr slice's prj.conf. |
| build/<core>-yocto/conf/alp-generated.conf | local.conf snippet consumed by bitbake (MACHINE=…, IMAGE_INSTALL). |
The system manifest is consumed downstream by west alp-image, west alp-flash, west alp-renode, and OTA.
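For orientation only, the manifest's shape might resemble the sketch below. Only the fields named in the table above are grounded; the key names, core IDs, and paths are guesses for illustration:

```yaml
# Guessed shape of build/system-manifest.yaml
slices:
  m33:
    status: built
    log: build/m33-zephyr/build.log
    artefact: build/m33-zephyr/zephyr/zephyr.bin
boot_order: [a55, m33]
```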
West command split
v0.5's monolithic west alp-build was split in v0.6 into five focused commands under scripts/west_commands/:
| Command | Purpose |
|---|---|
| west alp-build | Fan board.yaml out into per-core slices, emit system-manifest.yaml. |
| west alp-image | Consume the manifest, assemble a flashable bundle (build/image-bundle/). |
| west alp-flash | Walk the manifest's boot_order: and program each piece with the right backend. |
| west alp-clean | Tear down per-slice build dirs idempotently. |
| west alp-renode | Boot the dual-OS image in Renode and drive the RPMsg handshake smoke test. |
west alp-flash dispatches per artefact: vendor flasher for the SoC, openocd-via-SWD for the GD32 helper MCU on V2N, USB-CDC bootloader for the CC3501E on AEN. No tool selection is required on the developer's side.
See the board.yaml reference and the heterogeneous builds walkthrough.
Hardware identification
A two-stage SoM-ID flow runs at boot on every Alp Lab module:
- EEPROM manifest — 128-byte block on the on-module 24C128 EEPROM carrying family / SKU / hw_rev / serial / mfg date. Read via alp_hw_info_read() from <alp/hw_info.h>.
- BOARD_ID ADC — per-rev resistor divider sampled at boot to confirm the running firmware matches the hardware revision.
The build-time companion header (<alp_hw_info_build.h>, emitted by the loader) bakes the customer's board.yaml identifiers into ALP_HW_BUILD_* macros so alp_hw_info_assert_matches_build() can fail fast if the wrong firmware is flashed onto a unit.
Production-test programs the manifest with scripts/program_eeprom.py.
Repository split
The SDK is intentionally distributed across several repositories so each can evolve on its own cadence:
| Repo | Contains |
|---|---|
| alp-sdk | Portable <alp/...> API, chip drivers, OS backends, metadata, examples. |
| e1m-spec | The E1M open-standard pinout and mechanical spec (CC BY-SA 4.0). |
| alp-zephyr-modules | Zephyr board files for the official ALP Lab EVKs. |
| alp-sdk-vscode | VS Code extension — schema-aware editing, GUI configurator, west wrappers. |
| alp-studio | The Studio codegen tool — emits <alp/...> calls from block manifests. |
The split lets a customer pin alp-sdk and bump alp-zephyr-modules independently when the EVK schematic respins — and vice versa.
Architecture Decision Records
ADRs that explain why the SDK looks the way it does:
| ADR | Topic |
|---|---|
| 0001 | Wrapper over Zephyr / Yocto, not reimplementation |
| 0002 | alp_last_error() thread-local diagnostic |
| 0003 | Twelve wrapped peripheral classes |
| 0004 | E1M_<CLASS>_COUNT portability bound |
| 0005 | SDK ↔ Studio boundary |
| 0006 | Secure boot + secure OTA chain |
| 0007 | Wave-2 DSP — pipeline stages, not standalone primitives |
| 0008 | <alp/gpu2d.h> portable shim even for single-silicon |
| 0009 | Mender Zephyr client deferred to v1.1; secure OTA on Zephyr cut from v0.4 |
| 0010 | Heterogeneous OS orchestration — Zephyr/Yocto/baremetal coexist per-core |