Composable Firmware

What is Composable Embedded Firmware?

Composable embedded firmware is an approach to building embedded Linux systems where every component — kernel, drivers, middleware, and applications — is packaged as an independent, versioned container that can be composed into a complete system.

Instead of monolithic firmware images that are rebuilt from scratch for every change, composable firmware lets you:

  • Mix and match components like LEGO blocks
  • Update individual containers without touching the rest of the system
  • Roll back any component independently
  • Share and reuse firmware components across projects
  • Test containers in isolation before deploying to production

Why Pantavisor for Composable Firmware?

Pantavisor Linux is the lightweight container framework purpose-built for embedded systems. It enables composable firmware by treating every layer of your system as an independent LXC container:

Component          Container Type      Example
Kernel / BSP       Base container      linux-rpi-armv8
System services    Platform container  pv-alpine-connman
Middleware         App container       pv-pvr-sdk
User applications  App container       pvwificonnect, custom-app
With only a 1MB core and zero dependencies, Pantavisor runs on any architecture Linux supports: ARM, ARM64, x86, MIPS, and RISC-V.

Step-by-Step: Creating Composable Firmware

Pantavisor follows a strict two-phase model:

Phase                                        Tool                     Output
Build — produce a flashable initial image    Yocto + meta-pantavisor  .wic / .pvrexport.tgz
Maintain — update an already-running device  PVR CLI                  OTA revisions on Pantahub

PVR cannot produce a flashable image. Yocto cannot push an OTA update to a running device. Each tool owns its phase; do not mix them.

1. Get a Pantavisor Image

A Pantavisor device starts life as a flashable image (.wic or equivalent) containing BSP + Pantavisor runtime + initial container composition baked into /trails/0/. Two ways to obtain that image:

Option A: Download a Pre-Built Image (Quick Start)

For supported boards (Raspberry Pi, etc.), download a ready-made image from docs.pantahub.com/initial-devices.

Option B: Build Your Own Image with Yocto (Production)

For custom hardware, build with the meta-pantavisor Yocto layer:

git clone https://github.com/pantavisor/meta-pantavisor.git
cd meta-pantavisor

# Build with KAS (recommended)
kas build kas/scarthgap.yaml:kas/machines/raspberrypi-armv8.yaml:kas/bsp-base.yaml

# Or directly with BitBake
source layers/poky/oe-init-build-env build
bitbake pantavisor-starter

Build output lands in build/tmp-scarthgap/deploy/images/<machine>/<image>.wic — a full bootable image with the BSP and every container in PVROOT_CONTAINERS_CORE (see Section 2).

See the meta-pantavisor docs for machine configs, PANTAVISOR_FEATURES, and multiconfig builds.

2. Compose Containers at Build Time (Yocto)

Container composition for the initial flashed image happens in Yocto via BitBake recipes. The pvroot-image class assembles them automatically into the /trails/0/ initial state.

How It Works

  1. Each container is a recipe that produces a .pvrexport.tgz artifact.
  2. The image recipe lists which containers to include via PVROOT_CONTAINERS_CORE.
  3. At build time, pvroot-image extracts and deploys each container into the rootfs at /trails/0/ using pvr deploy.

Example: pantavisor-starter.bb

This image recipe composes a starter firmware from several container recipes:

SUMMARY = "Starter Image for Pantavisor"
LICENSE = "MIT"

inherit image pvroot-image

# Core containers baked into the initial firmware image
PVROOT_CONTAINERS_CORE ?= "pv-pvr-sdk pv-alpine-connman pvwificonnect"

# The base BSP image to use
PVROOT_IMAGE_BSP ?= "core-image-minimal"

PVROOT_CONTAINERS_CORE lists container recipes that get deployed into the initial Pantavisor state. These containers are core infrastructure (always present), while PVROOT_CONTAINERS (without _CORE) lists optional containers that are bundled into a factory-packages directory for first-boot installation.
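
As a sketch, the two variables can sit side by side in an image recipe; the recipe name in PVROOT_CONTAINERS (demo-dashboard) is hypothetical, used only to illustrate the split:

```
# Always-present infrastructure, deployed into /trails/0/ at build time
PVROOT_CONTAINERS_CORE ?= "pv-pvr-sdk pv-alpine-connman pvwificonnect"

# Optional containers, staged under factory-packages for first-boot install
# (recipe name below is a hypothetical example)
PVROOT_CONTAINERS ?= "demo-dashboard"
```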

Example: Container Recipes

In meta-pantavisor, a container can come from three sources. All produce a .pvrexport.tgz that pvroot-image deploys into the firmware.

1. GitLab Package Registry (Pre-Built)

Download an already-packaged container from GitLab’s package registry. This is fastest and ideal for stable, published containers:

SUMMARY = "Alpine Linux + ConnMan platform container"
LICENSE = "CLOSED"

inherit pvrexport

BB_STRICT_CHECKSUM = "0"

PVCONT_NAME="os"

SRC_URI += "\
    https://gitlab.com/api/v4/projects/pantacor%2Fpv-platforms%2Falpine-connman/packages/generic/alpine-connman/${PV}/alpine-connman.${PV}.${DOCKER_ARCH}.tgz;name=os;subdir=${BPN}-${PV}/pvrrepo/.pvr \
    file://mdev.json \
"

The pvrexport class fetches the .tgz, unpacks it into a PVR repo, and exports a signed .pvrexport.tgz.

2. Built from Scratch (Yocto Image Recipe)

Build a container rootfs from scratch using Yocto packages. Full control over what goes inside:

SUMMARY = "Pantavisor WiFi Connect container"
LICENSE = "MIT"

inherit core-image container-pvrexport

IMAGE_BASENAME = "pvwificonnect"
IMAGE_FSTYPES = "pvrexportit"

# What gets installed inside the container
IMAGE_INSTALL += "busybox pvwificonnect-app"

# Container runtime configuration files
SRC_URI += "file://args.json \
            file://config.json \
            file://pvwificonnect-config \
"

# Group assignment (platform = system service)
PVR_APP_ADD_GROUP = "platform"

# Extra volume mounts
PVR_APP_ADD_EXTRA_ARGS += " \
    --volume ovl:/tmp:permanent \
"

The container-pvrexport class:

  • Builds a rootfs from IMAGE_INSTALL
  • Runs pvr app add --from ${IMAGE_ROOTFS} to package it
  • Embeds args.json and config.json for runtime behavior
  • Generates mdev.json for device-node permissions
  • Exports a signed .pvrexport.tgz artifact
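
For a sense of what args.json can carry, here is a minimal sketch using the keys that appear under src_extra.args in the state example in Section 3; the full schema has more fields, so treat this as illustrative rather than exhaustive:

```
{
  "PV_GROUP": "platform",
  "PV_DRIVERS_OPTIONAL": ["wifi"]
}
```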

3. From Docker (Existing Container Image)

Pull an existing Docker/OCI image and convert it to a Pantavisor container. Great for reusing upstream or third-party images:

SUMMARY = "Alpine D-Bus container from Docker"
LICENSE = "CLOSED"

inherit pvrexport

BB_STRICT_CHECKSUM = "0"

PVR_DOCKER_REF = "asac/alpine-dbus:latest"

PVR_APP_ADD_EXTRA_ARGS += " \
    --volume /var/pvr-volume-boot:boot \
    --volume /var/pvr-volume-revision:revision \
    --volume /var/pvr-volume-permanent:permanent \
"

SRC_URI += "file://${BPN}.args.json"

The pvrexport class runs pvr app add --from=${PVR_DOCKER_REF} with the specified Docker platform, then signs and exports the result.

Compose a Full System

All three recipe types can be mixed in the same image recipe:

inherit image pvroot-image

# Containers from all three sources:
#   pv-alpine-connman  - from the GitLab registry
#   pvwificonnect      - built from scratch
#   pv-alpine-dbus     - from Docker
# (BitBake does not allow comments after a line continuation, so the
#  annotations live above the assignment instead of inline.)
PVROOT_CONTAINERS_CORE ?= "\
    pv-alpine-connman \
    pvwificonnect \
    pv-alpine-dbus \
"

At build time, pvroot-image:

  1. Builds each container recipe (if needed)
  2. Extracts their .pvrexport.tgz artifacts
  3. Deploys them into /trails/0/ via pvr deploy
  4. Mixes in the BSP (pantavisor-bsp) automatically

Container-to-Container Communication

Containers can export and consume services through the pv-xconnect service mesh. A container declares exported services in services.json:

{
  "#spec": "service-manifest-xconnect@1",
  "services": [
    {"name": "my-service", "type": "unix", "socket": "/run/my.sock"}
  ]
}

Consumers declare dependencies in args.json via PV_SERVICES_REQUIRED.
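
A consumer's args.json would then name the services it needs. The exact value format here is an assumption (check the pv-xconnect documentation); only the key name comes from the text above:

```
{
  "PV_SERVICES_REQUIRED": ["my-service"]
}
```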

Build the Composed Image

kas build kas/scarthgap.yaml:kas/machines/raspberrypi-armv8.yaml:kas/bsp-base.yaml

The resulting .wic or .pvrexport.tgz contains the fully composed firmware:

  • BSP (kernel + initramfs)
  • All PVROOT_CONTAINERS_CORE containers deployed into /trails/0/
  • Factory packages for optional containers

3. The Pantavisor State (Real Format)

Pantavisor stores the full system state as a JSON manifest. This is the real format the runtime uses, exactly as found on the device:

{
  "#spec": "pantavisor-service-system@1",
  "device.json": {
    "groups": [
      {"name": "data",    "restart_policy": "system",   "status_goal": "MOUNTED"},
      {"name": "root",    "restart_policy": "system",   "status_goal": "STARTED"},
      {"name": "platform","restart_policy": "system",   "status_goal": "STARTED"},
      {"name": "app",     "restart_policy": "container","status_goal": "STARTED"}
    ],
    "volumes": {
      "pv--devmeta":  {"persistence": "permanent"},
      "pv--phconfig": {"persistence": "permanent"},
      "pv--usrmeta":  {"persistence": "permanent"}
    }
  },
  "bsp/kernel.img": "4186c915bc30071a1395fbe6ebe81e328fc9b9ee88d6c5af7d27291b20afcf89",
  "bsp/modules.squashfs": "7bb6ce5913ad5c14e14537d552ad0fba7952011e9135f724f9c38baee9b76e53",
  "bsp/pantavisor": "8a4d1dbd5ac2a09f475f633eca0a06c1747b8aa86c807a3096ba1b92fa95996e",
  "bsp/run.json": {
    "firmware": "firmware.squashfs",
    "initrd": "pantavisor",
    "linux": "kernel.img",
    "modules": "modules.squashfs"
  },
  "os/root.squashfs": "dffbfec7c077a5ab06737f2cec9917bae6dedb39b9151172e42c2a22a2a36475",
  "os/pvpkg.json": {
    "name": "os",
    "version": "v1.3.2-g013c1ad",
    "package_url": "https://gitlab.com/pantacor/pv-platforms/alpine-connman",
    "src_extra": {
      "args": {
        "PV_DRIVERS_OPTIONAL": ["wifi", "usbnet", "bluetooth"],
        "PV_GROUP": "root"
      }
    }
  },
  "os/run.json": {
    "group": "root",
    "name": "os",
    "root-volume": "root.squashfs",
    "type": "lxc"
  },
  "pvr-sdk/root.squashfs": "34cbeeba79a2291f5d2538b20673def986d14bead958f29b847325bd1ca842c2",
  "pvr-sdk/run.json": {
    "group": "platform",
    "name": "pvr-sdk",
    "restart_policy": "system",
    "roles": ["mgmt"],
    "root-volume": "root.squashfs",
    "type": "lxc"
  },
  "pvwificonnect/root.squashfs": "14aa4bd15e84fb7a739e724d4c8b18eb0319b5a8b88ac26d85f915ffc3932ede",
  "pvwificonnect/pvpkg.json": {
    "name": "pvwificonnect",
    "version": "v1.5.7-g355ad5a",
    "package_url": "https://gitlab.com/pantacor/pvwificonnect"
  },
  "pvwificonnect/run.json": {
    "group": "platform",
    "name": "pvwificonnect",
    "restart_policy": "system",
    "root-volume": "root.squashfs",
    "type": "lxc"
  }
}

Key parts of the state:

  • device.json — groups, volumes, disks, and global device configuration
  • bsp/ — kernel, initramfs (pantavisor), modules, firmware (hashed)
  • <container>/run.json — runtime config: group, restart policy, volumes, drivers
  • <container>/src.json — source metadata: Docker digest, args, persistence
  • <container>/pvpkg.json — package metadata: name, version, URL, description
  • _config/<container>/... — overlay configuration files (e.g. config.json)
  • _sigs/<container>.json — cryptographic signatures for each container
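
Because the state is plain JSON, it can be inspected with ordinary tooling. A minimal sketch, assuming jq is installed; the manifest below is a trimmed copy of the format shown above:

```shell
# Write a trimmed state manifest to query
cat > /tmp/state.json <<'EOF'
{
  "#spec": "pantavisor-service-system@1",
  "os/run.json": {"group": "root", "name": "os", "type": "lxc"},
  "pvwificonnect/run.json": {"group": "platform", "name": "pvwificonnect", "type": "lxc"}
}
EOF

# List every container and the group it runs in
jq -r 'to_entries[]
       | select(.key | endswith("/run.json"))
       | "\(.value.name) -> \(.value.group)"' /tmp/state.json
# prints: os -> root
#         pvwificonnect -> platform
```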

4. Flash the Image to the Device

Write the .wic from Section 1 (downloaded or Yocto-built) to the target boot media (SD card, eMMC reader, USB stick) using pvflasher — Pantacor’s flashing tool. It supports .wic, .bmap, and compressed images (.gz/.bz2/.xz/.zst/.zip), uses block maps for speed, and verifies checksums automatically. Works on Linux, macOS, and Windows.

Install:

# Linux / macOS
curl -fsSL https://raw.githubusercontent.com/pantavisor/pvflasher/main/scripts/install.sh | bash

# Windows (PowerShell)
powershell -c "irm https://raw.githubusercontent.com/pantavisor/pvflasher/main/scripts/install.ps1 | iex"

Flash:

# List candidate disks (confirm target before writing!)
pvflasher list

# Pre-built image
sudo pvflasher copy pantavisor-starter-raspberrypi-armv8.rootfs.wic.bz2 /dev/sdX

# Yocto build output
sudo pvflasher copy \
  build/tmp-scarthgap/deploy/images/raspberrypi-armv8/pantavisor-starter-raspberrypi-armv8.wic \
  /dev/sdX

Replace /dev/sdX with the actual target — flashing the wrong disk destroys data irreversibly. Run pvflasher with no args to launch the GUI instead.

See the Download & Flash guide for bmaptool/dd fallbacks and platform-specific notes.

Boot the device from the flashed media. On first boot Pantavisor:

  1. Brings up every container in the baked-in composition (no deploy step).
  2. Registers the device with Pantahub (if network is available).
  3. Assigns it a nickname (e.g. fleet_rpi_001) you can use in PVR URLs.

Find the new device in your Pantahub account, then continue to Section 5.

5. Maintain & Update with PVR (Post-Flash)

Once a device is flashed (Section 4) and online, PVR is the tool for ongoing OTA updates: adding, updating, or removing containers, tweaking configuration, and rolling back. PVR never builds a flashable image — it mutates the state of an already-running Pantavisor device.

Install PVR

curl -sL https://gitlab.com/pantacor/pvr/-/raw/master/install.sh | bash
pvr login   # one-time Pantahub authentication

Clone the Device State

# Via Pantahub (works from anywhere) — URL form: https://pvr.pantahub.com/<USER>/<DEVICE_NICK>
pvr clone https://pvr.pantahub.com/highercomve/fleet_rpi_001 my-device
cd my-device

# Or directly on LAN (device must run pvr-sdk container, port 12368)
pvr clone <DEVICE_IP> my-device

Modify the Composition

# Update one container to a new upstream version
pvr app update pvwificonnect --from=https://gitlab.com/pantacor/pvwificonnect:latest

# Add a new container
pvr app add monitoring --from=prom/node-exporter:latest

# Remove one
pvr app rm legacy-app

# Edit config files in _config/<container>/ directly with your editor

Commit and Push

pvr add .
pvr commit -m "Update pvwificonnect, add monitoring"

# (Optional) Sign parts before deploying
pvr sig add --parts=pvwificonnect,monitoring

# Push to device via Pantahub
pvr post https://pvr.pantahub.com/highercomve/fleet_rpi_001

The device pulls the new revision, mounts the new containers, and switches atomically. If any container fails its status_goal, Pantavisor reverts to the previous revision automatically — no manual rollback required.

Composable Firmware vs Traditional Approaches

Feature                       Yocto / Buildroot  Balena        Pantavisor
Composable architecture       ❌ Monolithic      ⚠️ App-only   ✅ Full stack
Container runtime             ❌ None            ✅ Docker     ✅ LXC (1MB)
Kernel as container           ❌ No              ❌ No         ✅ Yes
Fit for constrained hardware  ⚠️ Heavy           ⚠️ Heavy      ✅ 1MB core
Atomic OTA rollback           ⚠️ Complex         ✅ Yes        ✅ Yes
Offline operation             ✅ Yes             ⚠️ Limited    ✅ Full

Key Concepts for Composable Firmware

Container Lifecycle Management

Pantavisor manages each container’s lifecycle independently:

  • Start/stop/restart any container without affecting others
  • Health checks per container
  • Resource limits (CPU, memory, I/O) per container
  • Logs and debugging per container via pvcontrol

Versioning and Reproducibility

Every pvr commit + pvr post produces a new immutable trail step in Pantahub. Each step is content-addressed (SHA256) and reproducible — the same state JSON always yields the same firmware composition.

# Clone a specific historical revision (rollback)
pvr clone https://pvr.pantahub.com/highercomve/fleet_rpi_001/steps/<REV> rollback-ws
cd rollback-ws
pvr post https://pvr.pantahub.com/highercomve/fleet_rpi_001
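
The content-addressing claim can be seen in miniature with plain coreutils (Linux sha256sum assumed): identical bytes always produce the same object hash, which is why an unchanged component is never re-stored or re-transferred:

```shell
# Two objects with identical content get identical SHA256 object IDs
printf 'composable' > /tmp/obj-a
printf 'composable' > /tmp/obj-b
test "$(sha256sum /tmp/obj-a | cut -d' ' -f1)" = \
     "$(sha256sum /tmp/obj-b | cut -d' ' -f1)" && echo "same object ID"
# prints: same object ID
```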

Pantavisor itself handles atomic rollback at runtime: if a new revision fails to boot or its containers don’t reach their status_goal, the device automatically reverts to the last good revision. No manual intervention required.

Note: pvr is the update tool used after a device has been flashed with a Pantavisor image. To produce the initial flashable image, build with Yocto/meta-pantavisor (Section 1 above) or download a pre-built image from docs.pantahub.com/initial-devices. Full PVR CLI reference: docs.pantahub.com/pvr.

Over-the-Air Updates

Updates are differential: only changed object hashes are transferred to the device, minimizing bandwidth. See Section 5 above for the full clone → modify → commit → post workflow.

Example: Composable IoT Gateway

Here’s a real-world composable firmware for an industrial IoT gateway:

my-iot-gateway/
├── bsp/              # Raspberry Pi 4 BSP (kernel + firmware)
├── platform/         # Alpine Linux + ConnMan (networking)
├── tailscale/        # VPN mesh networking
├── modbus/           # Industrial protocol adapter
├── mqtt-broker/      # Local MQTT broker
├── edge-app/         # Custom edge processing application
└── pantavisor state  # System manifest (device.json + containers)
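
Under the build-time composition model from Section 2, this layout could map onto an image recipe along these lines; pv-alpine-connman is the recipe used earlier, while the other recipe names are hypothetical stand-ins for the gateway components:

```
inherit image pvroot-image

# Gateway composition (recipe names other than pv-alpine-connman
# are hypothetical examples)
PVROOT_CONTAINERS_CORE ?= "\
    pv-alpine-connman \
    pv-tailscale \
    pv-modbus-adapter \
    pv-mqtt-broker \
    edge-app \
"
```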

Each component can be:

  • Developed by a different team
  • Updated on its own schedule
  • Tested in isolation
  • Reused across multiple products

Next Steps