Designing a Production Quality System: Tying Test Stations to a Single Audit-Ready Record

Topics: ISO 9001, quality systems, manufacturing, software-supported QMS

Tom Wade

2/19/2026 · 5 min read

When Resensys went through ISO 9001 certification, I helped build the data infrastructure underneath it: a MySQL backend, a Streamlit web interface, and a custom API enforcing strict standards on every outgoing assembly. I was a major contributor to the certification effort — owning significant pieces of process design and documentation, and participating in internal audits. Here's the architecture, the tradeoffs of building rather than buying a QMS tool, and what I learned about quality systems in practice rather than in theory.

The starting state

Before the certification effort, our quality records lived in a mix of spreadsheets, email threads, and operator memory. When something went out the door, the record of what tests it had passed, what its calibration values were, and what its production history looked like was scattered across half a dozen places. This worked when we were small. As we scaled into a larger facility and our customer base grew — including industrial customers and DOTs who care about traceability — it stopped working.

ISO 9001:2015 is, in essence, a framework for ensuring that quality is built into your processes systematically rather than left to individual heroics. The standard doesn't tell you exactly how to do it; it tells you what your system has to demonstrate. You need documented processes, you need records that show the processes are being followed, you need a way to handle non-conformances, you need internal audits to verify the system is working, and you need continuous improvement.

The hardest thing about ISO 9001, in my experience, isn't the standard itself — it's that compliance requires a level of operational discipline that many small companies don't have until they actively build it. Going through certification is, more than anything, an exercise in turning informal practices into formal ones.

Why I built rather than bought

Off-the-shelf QMS tools exist. Some of them are very good. We considered several and ultimately decided to build a custom solution for a few reasons:

Most off-the-shelf QMS tools are built for industries we weren't in — pharmaceuticals, aerospace, food. The vocabulary, the workflow assumptions, and the regulatory mappings didn't fit our hardware-manufacturing context cleanly. We could have configured them, but the configuration cost was significant.

Our test data lived in our own production systems, behind our own APIs and behind our own access controls. Integrating an external QMS tool would have meant building custom integrations both ways. At our scale, that integration work was comparable in cost to just building the QMS layer ourselves.

We also had a clearer sense of what good looked like for our specific context than any off-the-shelf vendor would. Our QC engineers knew exactly what records they needed, what reporting they wanted, and what the failure modes of any system would be. That domain knowledge was more easily expressed in a system we designed ourselves than in one we configured to a vendor's framework.

A note of caution: “build it ourselves” is the wrong answer for most companies. We could justify it because we already had the engineering capability, the test data integration was tractable, and the off-the-shelf options didn't fit. If any of those things hadn't been true, we'd have bought.

The architecture

The system has three layers.

At the data layer, a MySQL database stores the records of every test and inspection performed on every assembly. Each record links to the assembly's traceability information — serial number, build batch, components used — and to the test station and operator who performed the test. The schema enforces a one-to-many relationship between assemblies and test results, which means an assembly's complete history is reconstructable by query.
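To make the one-to-many relationship concrete, here's a minimal sketch of that data layer. The production system uses MySQL; this uses SQLite only so the sketch is self-contained, and every table and column name here is an illustrative assumption, not the real schema.

```python
import sqlite3

# Illustrative schema only -- the production backend is MySQL, and these
# table/column names are assumptions for the sketch, not the real ones.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE assemblies (
    serial_number TEXT PRIMARY KEY,
    build_batch   TEXT NOT NULL
);
CREATE TABLE test_results (
    id            INTEGER PRIMARY KEY,
    serial_number TEXT NOT NULL REFERENCES assemblies(serial_number),
    test_name     TEXT NOT NULL,
    passed        INTEGER NOT NULL,   -- 1 = pass, 0 = fail
    station       TEXT NOT NULL,      -- which test station ran it
    operator      TEXT NOT NULL,      -- who ran it
    recorded_at   TEXT NOT NULL       -- audit timestamp
);
""")

conn.execute("INSERT INTO assemblies VALUES ('SN-1001', 'BATCH-7')")
conn.executemany(
    "INSERT INTO test_results "
    "(serial_number, test_name, passed, station, operator, recorded_at) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    [
        ("SN-1001", "rf_calibration", 1, "station-2", "op-14", "2025-06-01T09:12:00"),
        ("SN-1001", "burn_in",        1, "station-5", "op-09", "2025-06-01T13:40:00"),
    ],
)

# One-to-many: an assembly's complete history is reconstructable by query.
history = conn.execute(
    "SELECT test_name, passed, station FROM test_results "
    "WHERE serial_number = ? ORDER BY recorded_at",
    ("SN-1001",),
).fetchall()
```

The point of the shape is the last query: because every test row carries the serial number, station, and operator, "what happened to this assembly" is a single ordered SELECT rather than an archaeology project.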

At the API layer, a custom REST API mediates between the production test stations and the database. Test stations don't write directly to the database; they post results through the API, which validates them, applies access control, and writes them with the appropriate audit metadata. This API also enforces the strict outgoing-assembly standards that the QMS depends on — if a test result fails or is missing, the API returns an error and the assembly is flagged for review rather than allowed to proceed.
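The API's gatekeeping role can be sketched as a single validation function. This is a hypothetical simplification, not the real endpoint: the field names, statuses, and the in-memory list standing in for the MySQL write are all assumptions.

```python
from datetime import datetime, timezone

# Illustrative required fields -- the real API's contract is assumed here.
REQUIRED_FIELDS = {"serial_number", "test_name", "result", "station", "operator"}

def post_test_result(payload: dict, db: list) -> dict:
    """Validate a test-station submission before it reaches the database.

    Stations never write directly: the API checks the payload, attaches
    audit metadata, and flags failing results for review rather than
    letting the assembly proceed.
    """
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        # Incomplete submissions are rejected outright -- no partial records.
        return {"status": "error", "detail": f"missing fields: {sorted(missing)}"}

    record = dict(payload)
    record["recorded_at"] = datetime.now(timezone.utc).isoformat()  # audit metadata
    record["flagged_for_review"] = payload["result"] != "pass"
    db.append(record)  # stand-in for the real MySQL write

    if record["flagged_for_review"]:
        return {"status": "flagged", "detail": "failing result held for review"}
    return {"status": "accepted"}
```

The design choice worth noticing is that a failing result is still written, just flagged: the record of the failure is itself a quality record, so the API never silently drops it.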

At the interface layer, a Streamlit web application gives operations and quality personnel live visibility into the quality system. They can see the status of any assembly in production, browse historical results, view non-conformance trends, and access the records auditors will eventually want to see. The interface is opinionated — it shows the views we needed, not every conceivable view — which makes it more useful than a general-purpose database client would be.

What “strict outgoing-assembly standards” means in practice

Every assembly that ships has to pass a defined set of tests. The standards aren't aspirational; they're enforced. The QC system won't let an assembly with a missing or failing test be marked as “ready to ship.” This sounds simple but is the whole game: many quality failures in manufacturing come from incomplete tests being treated as “good enough,” and the only way to prevent that is to make incomplete tests structurally impossible to mark as complete.
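The enforcement rule reduces to one check, sketched below with made-up test names. The key property is that a missing test and a failing test produce the same answer:

```python
# Illustrative test names -- the real required set is defined per product.
REQUIRED_TESTS = {"rf_calibration", "burn_in", "final_inspection"}

def ready_to_ship(results: dict[str, bool]) -> bool:
    """Shippable only if every required test exists AND passed.

    `results.get(test)` returns None for a missing test, which fails the
    `is True` check -- so an incomplete record can never read as complete.
    """
    return all(results.get(test) is True for test in REQUIRED_TESTS)
```

A general-purpose “is everything OK?” flag that someone sets by hand would drift; deriving shippability from the records themselves is what makes the standard structural rather than aspirational.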

The other thing the system does well is non-conformance handling. When a test fails, the system records the failure and ties it to the assembly. The assembly can then be reworked, re-tested, or rejected, and the record of what happened is preserved. Auditors love this kind of structured record-keeping — it's the difference between “we handle problems” and “we document how we handle problems.”
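The non-conformance trail works as an append-only log: events are added, never overwritten, so the full rework/re-test history survives. A minimal sketch, with hypothetical function and field names:

```python
def record_nonconformance(log: list, serial: str, event: str, detail: str) -> None:
    """Append a non-conformance event; nothing is ever edited or deleted.

    Illustrative sketch -- names and fields are assumptions, and the list
    stands in for an append-only database table.
    """
    log.append({"serial": serial, "event": event, "detail": detail})

log = []
record_nonconformance(log, "SN-1001", "test_failed", "burn_in out of spec")
record_nonconformance(log, "SN-1001", "reworked", "replaced sensor board")
record_nonconformance(log, "SN-1001", "retested", "burn_in passed on retest")
```

Because the failure, the rework, and the retest are all separate preserved events tied to the same serial number, an auditor sees not just that the assembly eventually passed, but exactly what happened in between.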

My role in the certification effort

I want to be precise about my role because precision matters in quality contexts. I was a major contributor to the ISO 9001 certification effort, not the sole owner. The overall effort was led by the broader team, with multiple people responsible for different parts of the system. My contributions were primarily in the documentation and process design areas — authoring procedures, defining process flows, and structuring records for audit readiness — along with building the technical infrastructure described above.

I participated in internal audits as part of the certification cycle. Internal audits are how you verify your own QMS is working before an external auditor does it for you. Going through them was, frankly, one of the most educational parts of the certification work. You learn quickly which processes hold up to scrutiny and which ones reveal gaps. The gaps you find are gifts — they're the things you can fix before the certification audit rather than discovering during it.

We are currently ISO 9001 certified. The system has held up under audit, the records the system produces are the kind auditors find satisfying, and the day-to-day operations of the QC team have improved measurably.

What ISO 9001 actually changes

Before the certification, quality was something we tried to do well. After certification, quality is something we systematically demonstrate we are doing well. The difference is more profound than it sounds.

When you have to document your processes formally, you find the parts of your processes that don't actually make sense. When you have to keep records, you find the failures you used to handle informally. When you have to do internal audits, you find the assumptions your team has made that turn out not to be true. None of this is comfortable, but all of it makes the system better.

The other thing ISO 9001 does is build customer confidence. Industrial customers, DOTs, and regulated buyers care about certification because it tells them you have a system, not just a hope. That signal is, for some customers, a precondition for doing business at all. Earning the certification opens commercial doors that would otherwise be closed.

What I'd tell someone going through this

If you're approaching ISO 9001 for the first time, a few things I'd suggest:

Start with the processes you actually have, not the ones you wish you had. The certification process is about formalizing reality, not about pretending you have a more sophisticated operation than you do. Start by writing down what you actually do; the gaps you find while writing are the things to fix.

Build the records into the workflow, not on top of it. If keeping records requires a separate step that someone has to remember, the records won't be kept consistently. If keeping records is the workflow — the test station emits the record automatically, the API enforces completeness, the system flags missing data — the records will be there when you need them.

Take internal audits seriously. They're the most useful thing in the entire system. Treat them as gifts, not as theater.