Multiplexorama: Bridging Hardware, Firmware, and Software in Mass Manufacturing

Designing a stateful flash programming pipeline with MUXORAMA, KiCad, Flask, MySQL, and Redis.

3/19/2025 · 7 min read

Multiplexorama is a custom PCB I designed in KiCad that, on the surface, does something boring: it shares a single debugger interface across multiple programming targets so we can flash several sub-assemblies at once. That description is true and also misses the point. The reason I built it wasn't to flash boards faster — although it does that, and we needed it to. The reason was to demonstrate a philosophy of manufacturing I've come to call Stateful Production, which borrows its frame from how physicists think about systems. This is a long post, because the project is the smaller, more interesting story inside a larger argument about how hardware companies should think about what production actually is.

The argument first

Stateful Production is the idea that a manufacturing line should be designed the way a physicist designs an experiment — around three things. First, a complete state definition: at any moment, you should be able to say what the full configuration of any unit on the line is, with no implicit context required. Second, a way to measure that state: instruments, fixtures, software, all of which exist to make state observable, not just to perform individual operations. Third, an equation of state: a clear, testable model for how units move from one state to another, so that the line is a sequence of well-defined transitions rather than a sequence of "and then someone does the thing."

Most assembly lines, including most of ours before this work, are not stateful in this sense. They're sequences of operations that produce records as a side effect. A device gets assembled. It gets flashed. It gets tested. It gets boxed. The state of any individual unit is implied by where it physically is on the line and by whatever the operator remembers, with the database catching up later. This works well enough at small scales. It does not scale to high-throughput production, and it does not give you the kind of visibility that lets you debug a problem two weeks after a unit has shipped.

Multiplexorama is a piece of hardware that I built to make this argument concrete in the smallest possible footprint.

The boring layer: what the hardware does

At the physical level, Multiplexorama is a PCB with a single debugger header on one side and a fan-out of programming connectors on the other, with a small array of multiplexer chips in the middle and a microcontroller orchestrating which target the debugger talks to at any given moment. The point of the board is that one expensive debugger and one operator can flash up to [PLACEHOLDER: N] sub-assemblies in a single sitting, with the host software cycling through targets, programming each one, verifying each one, and recording the result.

There's nothing exotic in the design itself. KiCad for schematic capture and layout. Standard analog multiplexer parts. A mid-range microcontroller as the dispatcher. Through-hole connectors chosen to match the target assembly's programming pinout so the board could slot into the existing fixture without rework. The bill of materials was deliberately mundane; this isn't a board where exotic parts would help.

What makes the board interesting isn't the silicon. It's what the host software is doing while the silicon is busy.

The interesting layer: what each flash actually does

When the operator presses go and Multiplexorama starts cycling through its targets, each "flash" is far more than a flash. It's an atomic state transition for a unit, executed as a coordinated sequence across Flask, MySQL, and Redis on the host side.

Concretely, when target N is being programmed, the host software is doing the following, all stitched into a single workflow:

  • Device-type assignment. The firmware image being loaded determines the device type. The host records this against the unit's serial number in the QC database via the MongoQC API.

  • Routing parameter generation. Each unit needs a unique local address for its wireless protocol. The host pulls the next available address from a Redis-backed allocator, writes it into the firmware image as part of the flash, and records it as the canonical routing identity for the unit.

  • Site ID assignment. If the unit is being built for a known deployment, its site ID is set as part of the flash. If it's stock, a holding site is assigned. Either way, the assignment is explicit, not inferred later.

  • QMS initialization. A QC project entry is created (or updated) for the unit. The required tests for its device type are pre-populated. The unit's record exists in the quality system before it has been tested for the first time, which means subsequent tests update an existing structure rather than creating one.

  • Cache population. The unit's initial state is pushed into Redis: device type, routing parameters, site ID, last-flashed timestamp, expected first-report quantities. The fleet cache knows about the unit before the unit has reported anything. (See the Redis post for why this matters.)

  • Audit logging. Every step is logged against the unit's history with timestamps, the operator's identity, and the firmware image's hash. The history is append-only — the same discipline as MongoQC's history endpoint — so the record of how a unit entered the system is permanent and tamper-resistant.
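The sequence above can be sketched as a single host-side routine. The function and field names here are invented for illustration, and the real MySQL, Redis, and MongoQC backends are stood in for by in-memory stubs — this is a sketch of the shape of the transition, not the production code.

```python
import hashlib
import time

# In-memory stand-ins for the real MySQL/Redis/MongoQC backends.
qc_db: dict[str, dict] = {}         # unit records keyed by serial
fleet_cache: dict[str, dict] = {}   # Redis-style cache
audit_log: list[dict] = []          # append-only history
_next_addr = iter(range(0x1000, 0x2000))  # stand-in for the Redis allocator

def provision_unit(serial: str, firmware: bytes, device_type: str,
                   site_id: str, operator: str) -> dict:
    """Run the whole flash-time transition; clean up if any step fails."""
    addr = next(_next_addr)                       # routing parameter
    record = {
        "device_type": device_type,               # device-type assignment
        "routing_addr": addr,
        "site_id": site_id,                       # explicit, never inferred
        "tests": ["power_on", "radio_link"],      # QMS pre-population
    }
    try:
        qc_db[serial] = record                    # QC record exists pre-test
        fleet_cache[serial] = {**record, "last_flashed": time.time()}
        audit_log.append({                        # append-only audit entry
            "serial": serial,
            "operator": operator,
            "fw_hash": hashlib.sha256(firmware).hexdigest(),
            "ts": time.time(),
        })
    except Exception:
        qc_db.pop(serial, None)                   # all-or-nothing cleanup
        fleet_cache.pop(serial, None)
        raise
    return record
```

The try/except cleanup is the crude in-memory version of the all-or-nothing guarantee: either every system learns about the unit, or none of them do.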

What used to be a programming step is now a state transition that stitches together hardware, firmware, software, and process. The unit goes from "physically assembled but logically nonexistent" to "fully provisioned, fully cached, fully audited, ready to be tested" in a single coordinated operation.

Why the board matters more than its throughput

The throughput numbers are real. We can flash and provision [PLACEHOLDER: number] sub-assemblies in the time we used to flash one. That's the immediate justification for the project, and it's enough on its own to have made the work worthwhile.

But the deeper reason the board matters is what it forced. Before Multiplexorama, the steps above existed but happened at different times, in different systems, with different operators and different opportunities for inconsistency. The unit got flashed at one station, then registered later at another, then provisioned in the cache after it started reporting, then audited eventually. There were gaps everywhere — moments when a unit existed in one system and not another, moments when an operator forgot a step, moments when the human-in-the-loop was the only thing holding two halves of the process together.

Multiplexorama collapsed all of that into a single transactional moment. The board flashes the firmware and, in the same atomic operation, every system that needs to know about the unit is told about it. Either the whole transition succeeds or none of it does. The unit either fully exists in the system or it doesn't exist at all. There is no intermediate state in which a unit is half-provisioned, and there is no human-driven catch-up step that has to happen later.

That's what stateful production actually means in practice. The unit's state is always defined. The transitions between states are atomic. The systems that care about the state are kept consistent by design rather than by discipline.

Designing the board with the software in mind

The hardware design choices were driven by the software workflow more than by the immediate flashing problem.

The microcontroller on the board is bigger than it strictly needs to be for multiplexer dispatch, because it also handles target identification — when a sub-assembly is plugged into channel 3, the board reads back identifying information from the assembly itself before flashing, so the host software has independent confirmation of which physical unit is on which channel. This is the kind of thing that sounds redundant until the first time an operator plugs assemblies in out of order; with the readback, the host doesn't care, because it knows what it's looking at.
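The host-side logic for that readback can be sketched as follows — the function names and the idea of building the plan from observed identity are illustrative, not the actual protocol:

```python
def plan_from_readback(channels, read_id, known_serials):
    """Build the programming plan from what each channel actually reports.

    The operator's plug-in order doesn't matter: a known serial gets
    programmed on whatever channel it showed up on, and an unknown
    serial is flagged instead of guessed at.
    """
    plan, unknown = {}, []
    for ch in channels:
        serial = read_id(ch)          # stand-in for the hardware readback
        if serial in known_serials:
            plan[ch] = serial
        else:
            unknown.append((ch, serial))
    return plan, unknown
```

Because the plan is keyed by what the board observed rather than what the operator intended, shuffled assemblies are a non-event.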

The connectors were chosen to fail safely. A misseated connection produces no programming activity rather than ambiguous results. The host software treats "target didn't respond" as a clean failure that retries cleanly, rather than a corrupt state that needs human cleanup.
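That "clean failure, clean retry" behavior might look something like this on the host side — the exception name and retry policy are assumptions for the sketch:

```python
class TargetSilent(Exception):
    """Raised when a channel produces no programming activity at all."""

def flash_with_retry(flash_once, channel, attempts=3):
    """Retry a silent target; anything else propagates for human review."""
    for _ in range(attempts):
        try:
            return flash_once(channel)
        except TargetSilent:
            continue                  # nothing was written; safe to retry
    raise TargetSilent(f"channel {channel} silent after {attempts} tries")
```

The key property is that a silent target leaves no partial state behind, so retrying is always safe; any other failure mode is surfaced rather than masked.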

The board's status LEDs are wired to the microcontroller's GPIO and reflect what the host software thinks the state of each channel is — armed, programming, verified, failed. The operator has unambiguous visual confirmation of where each sub-assembly is in the workflow without having to look at the host screen, which matters when you're running [PLACEHOLDER: N] units in a session and you want to glance at the rack to see if anything has stuck.

All of these choices are small. They add up to a board that's designed to be the physical anchor of a stateful workflow, not just a way to share a debugger.

Why this generalizes

I want to be careful not to over-claim. Multiplexorama is one piece of hardware in one company's manufacturing line. It's not a movement.

But I think the underlying idea — that you can design hardware to be the anchor for atomic state transitions in production, with the software stack stitching together every system that cares about the state — is exportable to a lot of other electronics manufacturing contexts. The toolchain we used (Flask, MySQL, Redis, KiCad, off-the-shelf microcontrollers) is what's available to any small-to-medium U.S. manufacturer. Nothing here required a contract with a specialized industrial automation vendor. The whole thing can be built, maintained, and modified by a small engineering team, which is exactly what makes it relevant for the kind of manufacturing the U.S. is trying to grow.

The alternative — keep doing things the way they're usually done, with fragmented systems and human-driven consistency — is fine at small scales and brittle at large ones. If you want a manufacturing line that scales without losing the thread of where every unit is and how it got there, you have to design for state explicitly, and you have to design the hardware as part of that, not as separate from it.

What I'd do differently

I'd version-control the hardware and the software together more deliberately from the start. The board has gone through [PLACEHOLDER: number] revisions and the host software has tracked along, but the relationship between board revision and software version was implicit for too long. Treating them as a single artifact in a single repo, with releases that name both, would have prevented some confusion.

I'd also build a simulation harness for the host software that doesn't require the physical board. Most of the state-transition logic can be exercised without any silicon at all, and being able to run those tests in CI rather than on a bench would have caught a few regressions earlier than they were caught.
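A minimal version of that harness is just an interface the host logic talks to, with a simulator swapped in for CI. Everything here is a hypothetical sketch of what such a fake board could look like:

```python
class FakeBoard:
    """Simulates the multiplexer board well enough to exercise the
    host-side state-transition logic without any silicon."""

    def __init__(self, targets):
        self.targets = dict(targets)   # channel -> serial (None if empty)
        self.flashed = set()

    def read_id(self, channel):
        return self.targets.get(channel)

    def flash(self, channel, image: bytes) -> bool:
        if self.targets.get(channel) is None:
            return False               # misseated connector: clean failure
        self.flashed.add(self.targets[channel])
        return True

def run_session(board, image: bytes):
    """Drive every populated channel and report per-unit results."""
    results = {}
    for ch in board.targets:
        serial = board.read_id(ch)
        results[ch] = (serial, board.flash(ch, image))
    return results
```

With the host logic written against this interface, the state-transition tests run in CI on every commit instead of waiting for a bench session.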

Closing

Multiplexorama is, in the end, not really about flashing boards. It's about what becomes possible when you decide that production is a stateful process and you design the tooling — hardware, firmware, software, all of it — to make that statement true. The throughput improvement was the visible result. The shift in how we think about manufacturing was the actual one.