Across both mining and manufacturing, the role of AI is changing. Early industrial deployments were celebrated simply for being automated or predictive. If a model could spot a pattern faster than a human, that was considered progress. As AI moves deeper into high-stakes operations where decisions influence throughput, recovery, safety, and millions in revenue, expectations have matured. The new requirement is explainability.
Operators and engineers no longer want an AI system that only provides an answer. They want to understand why recommendations are being made, which variables are influencing them, and how those suggestions align with the constraints they see on the ground. They want AI systems that behave like partners, not oracles. Research in advanced manufacturing has shown that explainable AI significantly improves trust, adoption, and the ability of frontline teams to integrate AI decisions into daily execution (Amann et al., 2022). Work from the DARPA XAI program reinforces this point: when humans can see how an AI model reasons, they perform better, rely on the system more appropriately, and maintain important oversight (Gunning and Aha, 2019).
Black-box models may perform well in controlled test environments, but in real operations they create predictable problems:
1. Operators stop trusting recommendations.
If a model advises increasing feed rate, changing a flotation target, or reshuffling a production schedule without showing its logic, operators ignore it. AI that cannot justify itself becomes background noise in the control room or on the shop floor.
2. Decision making becomes brittle.
When conditions shift, teams have no way to understand why the model changed its guidance, which inputs shifted, or whether they should override it. This is a major issue in environments with variable ore, unstable demand, or tight safety margins.
3. Organizations lose institutional knowledge.
If only the AI system understands why a decision is being made, the organization becomes dependent on a mechanism it does not control. In mining and manufacturing, where process knowledge is a strategic asset, this creates operational risk instead of value.
These limitations are not theoretical. They are a major reason why many early industrial AI projects stalled. Automation alone was not enough. Without clarity and justification, trust was never earned, and systems failed to move beyond pilot phases.
At NTWIST, our stance is clear. Black-box industrial AI is not the future. It does not belong in high-stakes processes that govern throughput, recovery, or production commitments. Our products are designed around three principles that make AI usable and trustworthy for operators.
1. Every recommendation must be traceable.
MillMax exposes its reasoning by showing what changed in the feed, which relationships matter, and how that translates into recommended setpoints. Operators can inspect the factors affecting P80, recovery, reagent use, or circuit risk. Transparency builds confidence in the guidance. A generic sketch of what this kind of traceability can look like follows this list.
2. AI should enhance human expertise, not replace it.
DynaMax and OreMax present variability, confidence intervals, and modeled assumptions in a clear format. Geologists, metallurgists, and planners can validate, question, and adjust decisions instead of blindly following automated guidance. The second sketch after this list illustrates the idea of reporting a forecast as a range rather than a single number.
3. Interpretability accelerates operational maturity.
Explainable AI helps teams understand root causes. They can see why throughput is unstable, why grade swings occur, or why a schedule fails under specific constraints. This insight helps strengthen planning, coordination, and decision making across entire sites.
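To make the first principle concrete, here is a minimal, generic sketch of traceability. It is not NTWIST code and does not depict how MillMax works internally; the feature names, data, and model are all hypothetical. The idea is simply that a recommendation ships with a ranked, inspectable list of the inputs it leans on, here computed with scikit-learn's permutation importance.

```python
# Generic illustration only; not NTWIST code. All names and data are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical historian extract: feed and circuit variables versus recovery.
feature_names = ["feed_p80_um", "feed_grade_pct", "reagent_dosage_gpt", "pulp_density_pct"]
X = rng.normal(size=(500, len(feature_names)))
y = 0.6 * X[:, 1] - 0.3 * X[:, 0] + 0.2 * X[:, 2] + rng.normal(scale=0.1, size=500)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Rank inputs by how much shuffling each one degrades the model, so an operator
# can see which variables the current guidance depends on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]:>20s}  importance {result.importances_mean[idx]:.3f}")
```

The specific library call is not the point. The point is that the factors behind a recommendation are visible and ranked, so operators can challenge them.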
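The second principle can be sketched the same way. The example below, again hypothetical and not a representation of DynaMax or OreMax, uses quantile gradient boosting to report a forecast as a range rather than a single number, which is one common way to put variability and confidence in front of planners.

```python
# Generic illustration only; not NTWIST code. All names and data are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                                  # e.g. blend ratio, hardness, tonnage
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=500)  # e.g. delivered grade

# Three quantile models give a lower bound, a central estimate, and an upper bound.
lower = GradientBoostingRegressor(loss="quantile", alpha=0.1, random_state=0).fit(X, y)
median = GradientBoostingRegressor(loss="quantile", alpha=0.5, random_state=0).fit(X, y)
upper = GradientBoostingRegressor(loss="quantile", alpha=0.9, random_state=0).fit(X, y)

scenario = X[:1]  # one incoming planning scenario
print(f"expected grade {median.predict(scenario)[0]:.2f} "
      f"(80% interval {lower.predict(scenario)[0]:.2f} to {upper.predict(scenario)[0]:.2f})")
```

Presented this way, a planner sees not only a number but how widely the model itself expects the outcome to vary.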
For us, explainability is not a feature. It is the foundation of how our systems are designed. It ensures that AI fits within the complex human workflows inside mines and plants, instead of forcing operators to adapt to opaque algorithms.
Mining and manufacturing are entering an era where decisions must be made in real time and where feed, demand, and process conditions shift more frequently. Optimization targets now evolve with market conditions, and workflows span multiple teams and constraints. In this environment, AI must do more than produce outputs. It must justify them clearly.
Transparent AI supports safer operations, stronger regulatory alignment, faster onboarding of new operators, more stable throughput and recovery, higher trust in automated recommendations, and better governance of decisions and assumptions. It also ensures that organizations maintain control of their own knowledge and expertise.
This is the foundation NTWIST builds on. Our systems make the reasoning of AI visible so operators can understand what the model sees, why it is making a recommendation, and how that recommendation supports the goals of the site. Industrial AI is moving from black box to glass box. NTWIST is already there.
References