
E1M-X V2N-M1

The E1M-X V2N-M1 pairs the Renesas RZ/V2N processor with an on-module DEEPX DX-M1 NPU, adding 25 TOPS of dense inference performance on top of the RZ/V2N's own 4 TOPS DRP-AI3, in the same 45 × 65 mm form factor as the base V2N.

Module variants

| SKU | Memory | Status |
| --- | --- | --- |
| E1M-V2M101 | 32 Gbit LPDDR4X + 32 Gbit eMMC + DX-M1 | production |
| E1M-V2M102 | 64 Gbit LPDDR4X + 64 Gbit eMMC + DX-M1 | production |

At a glance

| Parameter | Value |
| --- | --- |
| Application core | Quad Arm Cortex-A55 @ 1.8 GHz (RZ/V2N) |
| Real-time core | Arm Cortex-M33 (RZ/V2N) |
| AI accelerator 1 | Renesas DRP-AI3 (4 TOPS dense) |
| AI accelerator 2 | DEEPX DX-M1 (25 TOPS @ 1.0 GHz) |
| Total AI throughput | 4 + 25 TOPS (dense) |
| Module dimensions | 65 × 45 × 5 mm |
| Form factor | E1M-X (45 × 65 mm), LGA |
| OS targets | Yocto Linux |
| Indicative price | $179 |

What's added vs the V2N base

V2N-M1 inherits the full V2N base module (see V2N) and adds:

| Component | Where + how |
| --- | --- |
| DEEPX DX-M1 NPU | On-module, PCIe |
| M1_RESET | Renesas-side GPIO controlling DX-M1 reset (active-low) |
| 2 × PI3DBS12212A muxes | Switch PCIe routing between DEEPX and the E1M edge |
| 0.75 V DEEPX rail | DA9292 CH2 (disabled on V2N base; brought up by FW on M1) |
| 3 × TPS628640 bucks | DDR5/LPDDR rails for DEEPX (0x44 / 0x48 / 0x4F) |

Dual-accelerator architecture

| Accelerator | Performance | Strengths |
| --- | --- | --- |
| DRP-AI3 | 4 TOPS dense | Low-latency, tightly coupled to the ISP |
| DEEPX DX-M1 | 25 TOPS dense | High-throughput, large-model support |

Workloads can be partitioned across both accelerators. A vision pipeline might run a lightweight detection model on the DRP-AI3 while a heavier classification or segmentation model runs on the DEEPX side.

DEEPX bring-up

Host firmware (or the kernel) must run a four-step sequence after the Renesas side boots and before the Linux kernel opens the PCIe device:

  1. Enable the 0.75 V DEEPX rail via the secondary PMIC's CH2.
  2. ACK-probe the three DEEPX TPS628640 instances at 0x44 / 0x48 / 0x4F to confirm population.
  3. Route the PCIe muxes to the DEEPX path with the PI3DBS12212A driver.
  4. Release M1_RESET on Renesas PA6 (active-low).

The chips/deepx_dxm1/ driver wraps steps 3-4 into a single deepx_dxm1_bring_up(&ctx, DEEPX_DXM1_DEFAULT_BOOT_US) call. Walk-through with code: docs/bring-up-v2n-m1.md.

DEEPX userland

The DX-M1's userland runtime API (libdxrt.so) is developed upstream at github.com/DEEPX-AI/dx_rt. The Yocto layer that brings it into your image is wired in meta-alp-sdk/conf/machine/e1m-v2m101-a55.conf and references github.com/DEEPX-AI/meta-deepx-m1.
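In a conventional Yocto build, selecting that machine configuration from your build's local.conf might look like the fragment below. This is a sketch: the machine name is inferred from the .conf filename above, following the standard Yocto convention that MACHINE matches the machine .conf file's basename.

```
# conf/local.conf -- machine name inferred from e1m-v2m101-a55.conf
MACHINE = "e1m-v2m101-a55"
```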

ONNX models are converted to the .dxnn format using the DEEPX compiler; the workflow is model-family agnostic.

Drop-in upgrade from V2N

V2N-M1 shares the same E1M-X pinout and physical dimensions as the V2N base. Existing carrier board designs work without modification. The only change required is software:

  • Set cores.a55_cluster.inference: { backend: deepx_dx } in your board.yaml v2.
  • Ship a Yocto image built against e1m-v2m101-a55.conf (includes the DEEPX kernel driver and userland).
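Combining the keys mentioned on this page, a minimal board.yaml v2 fragment might look like the sketch below (som.sku is taken from the Getting started section; any other keys a complete board.yaml needs are omitted here):

```yaml
# board.yaml (v2) -- only keys named on this page; other required
# keys, if any, are omitted from this sketch
som:
  sku: E1M-V2M101
cores:
  a55_cluster:
    inference:
      backend: deepx_dx
```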

Getting started

  1. Mount the E1M-X V2N-M1 on a compatible carrier.
  2. Flash the V2N-M1 Yocto image (DEEPX driver + libdxrt.so included).
  3. For M33 development, drop a board.yaml with som.sku: E1M-V2M101 and follow the Quick Start.
