The AI Era Has a Manufacturing Problem. Here's What the Shop Floor Already Solved.
- Vera Fischer


A recent Siegel+Gale report on AI and branding describes what it calls "the new model" of governance: humans create direction, AI assists during creation, consistency is built in rather than bolted on, and governance happens during production rather than after.
I read that and smiled. Because that isn't a new model. It's Jidoka. And Sakichi Toyoda's automatic loom stopped itself when a thread broke in 1924.
Spend enough time around manufacturing and you start to notice something that should probably make the rest of the business world take a closer look: the frameworks currently being marketed as AI-era breakthroughs (build quality in rather than inspect it out, automate the mechanical work but hold humans at the decision points, standardize the system rather than the output) are principles manufacturing has been refining for a century.
And I think the organizations that recognize it first will have a real advantage in what comes next.
The Inspection-at-the-End Problem
In manufacturing, we learned the hard way that you cannot inspect quality in at the end of the line. By the time a defect reaches final inspection, you've already built it into fifty other units. You've wasted material, labor, and time. Worse, you've trained the line to accept that some level of defect is normal.
So we moved upstream. Statistical process control. In-line testing. Poka-yoke, the Japanese term for mistake-proofing: mechanisms designed so the operation physically cannot proceed if something is wrong. The principle is simple: the cheapest defect to fix is the one that never gets made.
Now look at how most organizations are governing AI output today. A marketing team generates hundreds of pieces of content. A brand or legal team reviews a sample at the end. Problems get caught downstream, if at all. Inconsistencies slip through. What Siegel+Gale calls the "Sea of Sameness Slop" piles up.
That is end-of-line inspection. It has never scaled, in any industry, ever. There is no reason to believe it will scale here.
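To make that concrete: moving inspection upstream for AI output looks less like a review meeting and more like a gate in the pipeline. Here is a minimal sketch; the `Draft` fields and the checks are hypothetical, stand-ins for whatever your brand and legal teams actually require.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    claims_verified: bool        # substantiated by a source?
    disclosure_included: bool    # required AI-assistance notice?

class GateFailure(Exception):
    """Raised when a draft cannot proceed. The line stops here."""

def poka_yoke_gate(draft: Draft) -> Draft:
    # The check runs in-line, before the next operation,
    # not as a sampled inspection after publication.
    if not draft.claims_verified:
        raise GateFailure("unverified claim: cannot proceed")
    if not draft.disclosure_included:
        raise GateFailure("missing disclosure: cannot proceed")
    return draft  # only conforming work moves downstream
```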
Jidoka, Reframed
Jidoka is often translated as "autonomation," or automation with a human touch. A machine runs on its own, but the moment it detects an abnormality, it stops. A human then diagnoses the problem. The machine doesn't just produce faster; it produces smarter, because the system knows when to hand off to a person.
That is exactly the model the current AI guidance is converging on.
- Omnicom is building tools to make data "machine-readable, prioritized, and trusted by AI agents."
- Siegel+Gale argues humans should "define strategy and direction" while AI "applies that system consistently at scale."
- Adobe GenStudio embeds brand parameters into creative workflows with automated reviews.
Strip the buzzwords and what you have is Jidoka. A system that runs autonomously within defined parameters and escalates to human judgment when something falls outside the tolerance band.
Manufacturers already know how to design those systems. We know how to define the tolerance bands. We know how to train operators to recognize drift before it becomes a defect. We know the difference between "abnormal but acceptable" and "stop the line."
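In code, that pattern is small. This is a sketch, not anyone's product: `generate`, `score`, and `escalate` are placeholders, and the thresholds are invented; the point is the shape of the tolerance band, including the difference between "abnormal but acceptable" and "stop the line."

```python
from enum import Enum

class Verdict(Enum):
    PASS = "within tolerance"          # ship it
    FLAG = "abnormal but acceptable"   # log it, watch the trend
    STOP = "stop the line"             # a human diagnoses now

def inspect(brand_score: float) -> Verdict:
    # Hypothetical thresholds; a real tolerance band comes from
    # your own brand, legal, and compliance requirements.
    if brand_score >= 0.90:
        return Verdict.PASS
    if brand_score >= 0.75:
        return Verdict.FLAG
    return Verdict.STOP

def jidoka_step(generate, score, escalate, prompt):
    """Run autonomously inside the band; stop and hand off to a
    human the moment output falls outside it."""
    output = generate(prompt)
    if inspect(score(output)) is Verdict.STOP:
        return escalate(prompt, output)  # the andon pull
    return output
```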
Most organizations deploying AI do not know these things yet. They are going to have to learn them, because the alternative isn't going to hold.
The Variability Problem No One Wants to Name
Here is something AI vendors don't tend to say out loud: large language models regress to the mean. They are, statistically, machines for generating the most probable next token. Run one a thousand times and the outputs cluster around a statistical center of gravity.
Manufacturers recognize this instantly. It's a process capability problem. Your process capability index may be high, but your target is wrong. You're producing consistently average output: low variance around your own mean, and low differentiation from everyone else's mean.
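For anyone who hasn't run the number, the index is one line of arithmetic. A sketch with invented data:

```python
import statistics

def cpk(samples, lsl, usl):
    """Process capability index: how comfortably a process fits
    inside its own spec limits. Above ~1.33 is usually 'capable'."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# A tight, well-centered process scores very high...
scores = [0.49, 0.50, 0.51, 0.50, 0.49, 0.51, 0.50]
print(round(cpk(scores, lsl=0.40, usl=0.60), 1))  # ~4.1: capable

# ...but Cpk only measures fit to your own spec. If the spec
# itself is "most probable output," a high score is the problem.
```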
On a factory floor, when every competitor's process produces the same commodity to the same spec, margins compress to zero. The way you win is with either a superior process (cheaper, faster, more reliable) or a genuinely differentiated product that the commodity process cannot make.
Marketing and communications are about to hit that same wall. When every brand's AI is trained on similar data and optimized against similar objectives, the output converges.
The companies that win will be the ones whose human inputs (the strategy, the taste, the proprietary data, the hard-won point of view) sit upstream of the model, shaping what it produces. Not the ones who ask the model to be distinctive on its own.
Manufacturing has known this for decades. Standardize the system. Differentiate the product. The logic doesn't change because the output is now a sentence instead of a sub-assembly.
Governance Is a Data Problem Before It Is a Policy Problem
One more translation worth making. The Omnicom work talks about "signal architecture," building coherent, repeatable cues for both machines and humans, and notes that models "seek structured, consistent data for pattern recognition" while "ambiguous signals lead to low visibility."
Any supply chain professional reading that is already nodding. This is the same problem we have been solving for years with master data management, ERP integration, traceability systems, and specification control. A part number means one thing across every system. A specification is documented, versioned, and auditable.
When the data is ambiguous, the physical process fails: wrong part, wrong quantity, wrong destination, wrong customer.
AI governance is the same discipline applied to a new substrate. Your brand, your values, your product truths, your compliance constraints: all of that has to be structured, versioned, and machine-readable. Not because it is elegant, but because ambiguous inputs produce unreliable outputs. We learned that with materials. We are now learning it again with language.
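"Structured, versioned, and machine-readable" in practice is closer to a controlled part drawing than a PDF style guide. A minimal sketch; every field here is illustrative, not a standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BrandSpec:
    """The language equivalent of a controlled part spec:
    documented, versioned, auditable, one meaning everywhere."""
    version: str
    voice: str
    banned_claims: tuple[str, ...]
    required_disclosures: tuple[str, ...]

SPEC = BrandSpec(
    version="2.3.0",  # revised and traceable like any spec change
    voice="plainspoken, evidence-first",
    banned_claims=("guaranteed results",),
    required_disclosures=("AI-assisted",),
)
```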
The irony, for anyone paying attention, is that the Siegel+Gale team tells a story in their own report about uploading a brief into an AI tool and having it falsely attribute "the go and see principle" to a client when the principle actually belongs to the Toyota Way.
The AI hallucinated a manufacturing concept into a context where it didn't belong. The cautionary tale is itself a manufacturing cautionary tale.
Why This Matters Beyond the Plant
If you have spent your career in operations, supply chain, or manufacturing, you have been trained in the exact discipline this moment requires: designing systems that produce consistent outcomes at scale, with quality embedded at the source, clear tolerance bands, and humans placed precisely where their judgment matters most.
That discipline is not locked inside the plant. Right now it is one of the most transferable skill sets in business, and most organizations underestimate how much they need it.
Marketing is learning that brand governance has to be built into production. Legal is learning that AI compliance has to be designed in, not bolted on. HR is learning that AI-assisted hiring needs the same bias-detection we would call statistical process control if it were on a line. Across every function, the same lesson is being relearned under a new name.
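That last analogy is literal. A Shewhart-style three-sigma check on, say, a cohort's weekly pass-through rate is the same arithmetic an operator runs on a critical dimension; the numbers below are hypothetical:

```python
import statistics

def out_of_control(history, new_value, k=3.0):
    """Classic control-chart check: flag any point more than
    k standard deviations from the historical mean."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return abs(new_value - mu) > k * sigma

# Hypothetical weekly pass-through rates for one applicant cohort.
rates = [0.41, 0.39, 0.40, 0.42, 0.40, 0.38, 0.41]
print(out_of_control(rates, 0.24))  # True: stop and investigate
```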
So the takeaway I'd offer, especially to people in my own field who may not realize how much leverage they hold in conversations about AI:
The AI era does not require us to invent a new playbook. It requires us to recognize that a playbook already exists and that the people who know it best have been working with their hands, on factory floors, in distribution centers, and across supply chains for a very long time.
Before your organization spends another quarter trying to derive AI governance from first principles, talk to someone who has spent twenty years making a line run clean.
They already know.


