
Architecture

The ALP SDK is a thin unification layer on top of three OS backends and (currently) four vendor SDKs. This page explains the layering, the boundaries, and how a single board.yaml flows through the build.

Stack overview

┌──────────────────────────────────────────────────────────────────┐
│ Application code                                                 │
│ (yours, or alp-studio codegen)                                   │
└──────────────────────────────────────────────────────────────────┘

┌──────────────────────────────────────────────────────────────────┐
│ ALP SDK  <alp/*.h>                                               │
│ ─ public surface (handles, error codes, instance IDs)            │
│ ─ <alp/soc_caps.h> (generated per SoC)                           │
│ ─ <alp/e1m_pinout.h> (portable instance IDs)                     │
│ ─ <alp/chips/*> (20+ opt-in chip drivers)                        │
└──────────────────────────────────────────────────────────────────┘

┌──────────────────────────────────────────────────────────────────┐
│ Backend implementations   src/zephyr/  src/baremetal/  src/yocto/│
│ ─ one .c per public class, per OS                                │
│ ─ thin wrappers calling the OS's driver model                    │
└──────────────────────────────────────────────────────────────────┘

┌──────────────────────────────────────────────────────────────────┐
│ OS layer                                                         │
│ ─ Zephyr device drivers / Linux ioctls / Alif HAL                │
└──────────────────────────────────────────────────────────────────┘

┌──────────────────────────────────────────────────────────────────┐
│ Vendor HAL   vendors/<vendor>/                                   │
│ ─ Alif Ensemble · Renesas RZ/V2N · NXP i.MX 93 · DEEPX DX-M1     │
└──────────────────────────────────────────────────────────────────┘

E1M / E1M-X hardware

Why a wrapper, not a re-implementation

ADR 0001 records the decision: the SDK wraps the OS's driver model rather than reimplementing peripheral access. On Zephyr that means alp_i2c_* forwards to Zephyr's i2c_* API; on Yocto it forwards to i2c-dev ioctls; on bare-metal it calls the vendor HAL directly.

The win: the SDK gets every OS's driver fix, security patch, and silicon enablement for free. The cost: the public API can never assume features the underlying OS doesn't expose — the wrapper layer must stay shallow.
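The wrapper pattern is easiest to see in code. The sketch below is illustrative only: the backend stub stands in for whatever the OS layer provides (Zephyr's i2c_* API, an i2c-dev ioctl, or a vendor HAL call), and the error values and exact signature of alp_i2c_write are assumptions, not the SDK's real definitions.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef int alp_err_t;
enum { ALP_OK = 0, ALP_ERR_IO = -5 };   /* illustrative values */

/* Stand-in for the OS driver call (Zephyr i2c_write(), i2c-dev ioctl, HAL). */
static int backend_i2c_write(uint16_t addr, const uint8_t *buf, size_t len)
{
    (void)addr; (void)buf;
    return len > 0 ? 0 : -1;            /* pretend the bus rejects empty writes */
}

/* The wrapper stays shallow: forward the call, then map the backend's
 * return code onto the SDK's alp_err_t contract. No state, no policy. */
alp_err_t alp_i2c_write(uint16_t addr, const uint8_t *buf, size_t len)
{
    return backend_i2c_write(addr, buf, len) == 0 ? ALP_OK : ALP_ERR_IO;
}
```

Because the wrapper adds no state of its own, a driver fix in the backend is picked up with no change to the ALP layer.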

Yocto layer

The Yocto backend ships as the meta-alp-sdk Yocto layer at the top level of the SDK repo. The layer carries:

  • per-cluster MACHINE confs (e1m-v2n101-a55, e1m-v2m101-a55, e1m-nx9101-a55, e1m-aen501-a32, …),
  • the per-SKU recipes for ALP chip drivers + libraries,
  • the userspace remoteproc helper (alp-remoteproc_0.6.bb),
  • the generated reserved-memory: fragment (alp-dts-reservations_0.6.bb), and
  • userspace bindings for <alp/storage.h> / <alp/iot.h> / <alp/inference.h> / <alp/rpc.h>.

Per-cluster MACHINE naming (not per-SoM) became normative in v0.6 — the M-class peer on a V2N is no longer implied to be part of the same MACHINE as the A55 cluster.

Add BBLAYERS += "${TOPDIR}/../alp-sdk/meta-alp-sdk" to bblayers.conf and pick the right MACHINE; the orchestrator emits the supporting IMAGE_INSTALL deltas from your board.yaml automatically.
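Put together, the two edits look like this (the MACHINE value here is one of the per-cluster confs listed above; pick the one matching your SKU):

```
# conf/bblayers.conf: register the layer
BBLAYERS += "${TOPDIR}/../alp-sdk/meta-alp-sdk"

# conf/local.conf: select the per-cluster MACHINE
MACHINE = "e1m-v2n101-a55"
```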

Migration note. The old yocto/meta-alp/ tree was deleted in v0.6 — its content was absorbed into meta-alp-sdk/. Customers consuming the old layer name should re-point at meta-alp-sdk.

Vendor SDK / HAL integration

The SDK doesn't re-implement vendor HALs — it pulls them in as Zephyr modules pinned in west.yml:

  • Alif Ensemble — upstream sdk-alif HAL via the AEN board files in alplabai/alp-zephyr-modules.
  • Renesas RZ/V2N — Zephyr's hal_renesas already mirrors the RZ/V FSP for the N44 SoC; west update pulls it automatically.
  • NXP i.MX 9352 — MCUXpresso via Zephyr's hal_nxp module (M33 core).
  • DEEPX DX-M1 — userland from upstream github.com/DEEPX-AI/dx_rt; Yocto layer via github.com/DEEPX-AI/meta-deepx-m1.

The on-module TI CC3501E Wi-Fi/BT coprocessor on AEN is not a vendor SDK dependency — its firmware ships pre-built from alplabai/cc3501e-firmware and the orchestrator bundles it into the image like any other helper-MCU artefact. Host code talks to it through the documented inter-chip protocol; no TI SimpleLink SDK is built into the SDK.

Per-vendor integration status, license terms, and example pins live in docs/vendor-partnerships.md.

ABI stability annotations

Every public header carries an [ABI-STABLE] or [ABI-EXPERIMENTAL] marker on the file-level Doxygen block. Stable headers carry an ABI snapshot in the SDK repo and the pr-abi-snapshot.yml CI job blocks merges that change the binary surface without an explicit bump. Experimental headers (e.g. <alp/gpu2d.h>, the wave-2 <alp/dsp.h> chain types, <alp/power.h>'s system-mode setter) reserve the right to change shape before v1.0 — pin to a specific SDK commit if you depend on them.
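As a shape reference, an annotated header's file-level block looks roughly like this (the file name and descriptive prose are invented for illustration; only the [ABI-STABLE] marker convention comes from this page):

```c
/**
 * @file  alp_i2c.h        (illustrative name)
 * @brief I2C wrapper class.
 *
 * [ABI-STABLE]
 * Binary surface is snapshotted; any change must ship with an explicit
 * bump, or the pr-abi-snapshot.yml CI job blocks the merge.
 */
```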

The full annotation policy + machinery live in docs/abi-stability.md.

Error model

Every alp_*_open() returns a handle or NULL. Every operation returns an alp_err_t integer (0 = ALP_OK). When a handle is NULL, alp_last_error() — a thread-local — tells you why:

Error                  Meaning
ALP_ERR_NOSUPPORT      Peripheral / feature not implemented for this OS / SoM yet.
ALP_ERR_OUT_OF_RANGE   Requested config exceeds the SoC's documented caps.
ALP_ERR_NOT_READY      Chip not populated (ACK-probe failed) or rail not up.
ALP_ERR_TIMEOUT        Hardware did not respond within the configured window.
ALP_ERR_IO             Bus / link error (also: AEAD tag mismatch on decrypt).

ADR 0002 covers the full contract. The point of alp_last_error() is that hand-written firmware can diagnose NULL returns without changing the API shape — no out-parameters, no global state on a successful path.
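A minimal caller-side sketch of that contract, with stub definitions standing in for the real SDK (the numeric error values and the open signature are assumptions):

```c
#include <assert.h>
#include <stddef.h>

typedef int alp_err_t;
enum { ALP_OK = 0, ALP_ERR_NOT_READY = 3 };   /* illustrative values */

static _Thread_local alp_err_t g_last_err = ALP_OK;
alp_err_t alp_last_error(void) { return g_last_err; }  /* thread-local, per ADR 0002 */

typedef struct alp_i2c alp_i2c_t;

/* Stub open: pretend the ACK probe failed, as the real backend would
 * report for an unpopulated chip. */
alp_i2c_t *alp_i2c_open(int instance)
{
    (void)instance;
    g_last_err = ALP_ERR_NOT_READY;   /* written only on the failure path */
    return NULL;
}

/* Caller pattern: check the handle, then ask why. */
void app_init(void)
{
    alp_i2c_t *bus = alp_i2c_open(0);
    if (bus == NULL && alp_last_error() == ALP_ERR_NOT_READY) {
        /* chip not populated or rail not up: degrade gracefully */
    }
}
```

Note that the success path never touches the thread-local, matching the "no global state on a successful path" rule.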

Capability validation

Three layers protect against asking for hardware features that don't exist:

  • Build-time — the orchestrator cross-checks every entry in each cores.<id>.peripherals: against the SoC's metadata/socs/<vendor>/<family>/<part>.json capability profile scoped to that core. A board.yaml asking for i2s on a core that doesn't route I²S fails at west alp-build time with exit code 3, before any compile work. cores: keys are also cross-checked against the SoM preset's topology: block — a typo in a core ID fails the same gate.
  • Capability-keyed library bindings — each library that depends on a hardware accelerator declares its priority list with a requires_cap: matcher (e.g. ethos_u85_count). The loader reads each SoM preset's capabilities: block and emits the matching CONFIG_* for the highest-priority capability the SKU actually has. The Ethos-U55 → U85 split on Alif Ensemble (E4 / E6 / E8 carry both; E3 / E5 / E7 carry only U55) is driven entirely by this mechanism — application code never names the NPU.
  • Runtime — <alp/soc_caps.h> is generated from the same metadata/socs/ JSON and consulted by *_open(). E.g. alp_adc_open with resolution_bits = 16 on a 12-bit SoC returns NULL and stamps ALP_ERR_OUT_OF_RANGE.
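The runtime layer reduces to a bounds check against generated constants. In this sketch the cap macro, struct fields, and error values are invented stand-ins for the generated <alp/soc_caps.h> and the real open signature:

```c
#include <assert.h>
#include <stddef.h>

#define ALP_SOC_ADC_MAX_RES_BITS 12   /* stand-in for generated <alp/soc_caps.h> */

typedef int alp_err_t;
enum { ALP_OK = 0, ALP_ERR_OUT_OF_RANGE = 2 };   /* illustrative values */

static _Thread_local alp_err_t g_last_err = ALP_OK;
alp_err_t alp_last_error(void) { return g_last_err; }

typedef struct { int resolution_bits; } alp_adc_cfg_t;
typedef struct { int resolution_bits; } alp_adc_t;
static alp_adc_t g_adc;   /* a single instance is enough for the sketch */

alp_adc_t *alp_adc_open(int instance, const alp_adc_cfg_t *cfg)
{
    (void)instance;
    if (cfg->resolution_bits > ALP_SOC_ADC_MAX_RES_BITS) {
        g_last_err = ALP_ERR_OUT_OF_RANGE;   /* fail before touching hardware */
        return NULL;
    }
    g_adc.resolution_bits = cfg->resolution_bits;
    return &g_adc;
}
```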

E1M portability bound

E1M_<CLASS>_COUNT macros in <alp/e1m_pinout.h> document the cross-SoM-portable instance count per peripheral class. An app that opens only instances with an index below the count — e.g. E1M_I2C0 and E1M_I2C1 when the I²C count is 2 — works on every conformant SoM. See ADR 0004.

If your app needs more instances than the portable bound, it stays correct on the SoMs that route them — it just isn't portable to ones that don't.
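Guarding against the bound is mechanical. The macro values below are invented for the sketch; on a real build they come from <alp/e1m_pinout.h>:

```c
#include <assert.h>

/* Stand-ins for <alp/e1m_pinout.h>; real values follow the E1M spec. */
#define E1M_I2C_COUNT 2   /* portable bound: instances 0..1 on every conformant SoM */
#define E1M_I2C0 0
#define E1M_I2C1 1
#define E1M_I2C2 2        /* beyond the bound: routed only on some SoMs */

/* 1 when the instance is inside the cross-SoM-portable set. */
static int e1m_i2c_is_portable(int instance)
{
    return instance < E1M_I2C_COUNT;
}

#if E1M_I2C_COUNT > 2
/* Code that opens E1M_I2C2 can be compiled in only where it is portable. */
#endif
```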

OS backend selection

board.yaml v2 declares the backend per on-die core under cores.<id>.os. The SDK ships three runtimes; one project can use all three simultaneously:

Backend       Selected by      Implementation lives in    Comments
Zephyr        os: zephyr       src/zephyr/                First-class on AEN M55 + V2N / iMX93 M-class peers. Zephyr v4.4.0 pinned.
Yocto         os: yocto        src/yocto/                 First-class on V2N / V2N-M1 / iMX93 / AEN-E5..E8 A-class clusters. Uses i2c-dev, spidev, GPIO chardev v2, ALSA, OpenSSL, libmosquitto.
Bare-metal    os: baremetal    src/baremetal/             Calls vendor HAL directly. Coverage lands per the test plan.
(skip core)   os: off                                     Explicit "leave this core dark in this project".

A bare som: { sku: <MPN> } with no cores: overrides inherits the SoM preset's topology: defaults — every heterogeneous SoM produces a working dual-image build out of the box.
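A hypothetical board.yaml v2 exercising all three backends at once; the SKU, core IDs, and peripheral names are illustrative, and only the key shapes (som.sku, cores.<id>.os, cores.<id>.peripherals) come from this page:

```yaml
som:
  sku: E1M-V2N101           # hypothetical MPN; pulls the SoM preset's topology defaults
cores:
  a55:
    os: yocto               # Linux slice
    peripherals: [i2c, spi]
  m33:
    os: zephyr              # real-time slice
    peripherals: [i2c, pwm]
  helper:
    os: off                 # leave this core dark in this project
```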

Two consumer paths, same API

ADR 0005 records the SDK ↔ Studio boundary.

Hand-written

You write <alp/...> calls directly. Instance IDs come from <alp/e1m_pinout.h>. The build is west alp-build → west alp-image → west alp-flash (or west alp-build → west build for single-slice Zephyr projects).

alp-studio codegen

Studio reads its block library + your project document, runs a pin allocator against the active SoM, and emits C that calls the same <alp/...> API. Switching between paths is non-destructive — Studio's generated code coexists with hand-written code in the same project.

Orchestrator — scripts/alp_orchestrate.py

The orchestrator is the SDK's central tool in v0.6 — it replaced the single-slice alp_project.py loader when board.yaml schema v2 landed. Inputs:

  • board.yaml at the app root (schema v2)
  • metadata/e1m_modules/<MPN>.yaml (SoM preset — carries topology:, memory_map:, mailbox:, helper_firmware: blocks)
  • metadata/carriers/<carrier>.yaml (carrier preset)
  • metadata/socs/<vendor>/<family>/<part>.json (silicon capability profile; cores[].id set is the topology key set)
  • metadata/library-profiles/<lib>/ (compile-time tuning for libraries)

Generated artefacts (byte-stable across rebuilds):

Path                                         What it carries
build/system-manifest.yaml                   Per-slice status, log paths, artefact paths, boot order.
build/generated/alp/system_ipc.h             Endpoint IDs, addresses, mailbox channel macros — shared by all slices.
build/generated/dts-reservations.dtsi        reserved-memory: carve-outs shipped into Linux + Zephyr DTs.
build/<core>-zephyr/alp.conf                 Kconfig fragment layered onto each Zephyr slice's prj.conf.
build/<core>-yocto/conf/alp-generated.conf   local.conf snippet consumed by bitbake (MACHINE=…, IMAGE_INSTALL).

The system manifest is consumed downstream by west alp-image, west alp-flash, west alp-renode, and OTA.
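For orientation, a hypothetical excerpt of the generated IPC header; every macro name and value below is invented, and the real content is derived from your board.yaml by the orchestrator:

```c
#ifndef ALP_SYSTEM_IPC_H
#define ALP_SYSTEM_IPC_H

/* One endpoint ID per declared channel, identical in every slice. */
#define ALP_IPC_EP_SENSOR_STREAM   1
#define ALP_IPC_EP_CONTROL         2

/* Shared-memory region agreed with dts-reservations.dtsi. */
#define ALP_IPC_SHM_BASE           0x80000000u
#define ALP_IPC_SHM_SIZE           0x00100000u

/* Mailbox channel numbering, same on the A-class and M-class side. */
#define ALP_IPC_MBOX_CH_A_TO_M     0
#define ALP_IPC_MBOX_CH_M_TO_A     1

#endif /* ALP_SYSTEM_IPC_H */
```

Because every slice includes the same generated file, the two OSes cannot disagree about endpoint numbering or carve-out addresses.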

West command split

v0.5's monolithic west alp-build was split in v0.6 into five focused commands under scripts/west_commands/:

Command           Purpose
west alp-build    Fan board.yaml out into per-core slices, emit system-manifest.yaml.
west alp-image    Consume the manifest, assemble a flashable bundle (build/image-bundle/).
west alp-flash    Walk the manifest's boot_order: and program each piece with the right backend.
west alp-clean    Tear down per-slice build dirs idempotently.
west alp-renode   Boot the dual-OS image in Renode and drive the RPMsg handshake smoke test.

west alp-flash dispatches per artefact: vendor flasher for the SoC, openocd-via-SWD for the GD32 helper MCU on V2N, USB-CDC bootloader for the CC3501E on AEN. No developer-side tool-selection.

See the board.yaml reference and the heterogeneous builds walkthrough.

Hardware identification

Two-stage SoM-ID flow runs at boot on every Alp Lab module:

  1. EEPROM manifest — 128-byte block on the on-module 24C128 EEPROM carrying family / SKU / hw_rev / serial / mfg date. Read via alp_hw_info_read() from <alp/hw_info.h>.
  2. BOARD_ID ADC — per-rev resistor divider sampled at boot to confirm the running firmware matches the hardware revision.

The build-time companion header (<alp_hw_info_build.h>, emitted by the loader) bakes the customer's board.yaml identifiers into ALP_HW_BUILD_* macros so alp_hw_info_assert_matches_build() can fail fast if the wrong firmware is flashed onto a unit.
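A sketch of that fail-fast check; the struct layout, macro value, and return conventions are assumptions, with a stub standing in for the EEPROM read:

```c
#include <assert.h>
#include <string.h>

#define ALP_HW_BUILD_SKU "E1M-V2N101"   /* baked from board.yaml at build time (hypothetical value) */

typedef struct { char sku[16]; int hw_rev; } alp_hw_info_t;

/* Stub read: real code pulls the 128-byte manifest off the 24C128 EEPROM. */
static int alp_hw_info_read(alp_hw_info_t *out)
{
    strcpy(out->sku, "E1M-V2N101");
    out->hw_rev = 3;
    return 0;
}

/* Returns 0 when the flashed firmware matches the unit it runs on. */
int alp_hw_info_assert_matches_build(void)
{
    alp_hw_info_t info;
    if (alp_hw_info_read(&info) != 0)
        return -1;                                   /* manifest unreadable */
    return strcmp(info.sku, ALP_HW_BUILD_SKU) == 0 ? 0 : -1;
}
```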

Production-test programs the manifest with scripts/program_eeprom.py.

Repository split

The SDK is intentionally distributed across several repositories so each can evolve on its own cadence:

Repo                 Contains
alp-sdk              Portable <alp/...> API, chip drivers, OS backends, metadata, examples.
e1m-spec             The E1M open-standard pinout and mechanical spec (CC BY-SA 4.0).
alp-zephyr-modules   Zephyr board files for the official ALP Lab EVKs.
alp-sdk-vscode       VS Code extension — schema-aware editing, GUI configurator, west wrappers.
alp-studio           The Studio codegen tool — emits <alp/...> calls from block manifests.

The split lets a customer pin alp-sdk and bump alp-zephyr-modules independently when the EVK schematic respins — and vice versa.

Architecture Decision Records

ADRs that explain why the SDK looks the way it does:

ADR    Topic
0001   Wrapper over Zephyr / Yocto, not reimplementation
0002   alp_last_error() thread-local diagnostic
0003   Twelve wrapped peripheral classes
0004   E1M_<CLASS>_COUNT portability bound
0005   SDK ↔ Studio boundary
0006   Secure boot + secure OTA chain
0007   Wave-2 DSP — pipeline stages, not standalone primitives
0008   <alp/gpu2d.h> portable shim even for single-silicon
0009   Mender Zephyr client deferred to v1.1; secure OTA on Zephyr cut from v0.4
0010   Heterogeneous OS orchestration — Zephyr/Yocto/baremetal coexist per-core