The Signal by NTWIST | Blog on AI & Operational Excellence

AI Mining Optimization: Mine-to-Mill Decisions

Written by NTWIST | 16-Apr-2026 7:47:32 AM

Every operating mine you visit has the same set of data collection and presentation dashboards. Despite this abundant data collection, massive gaps remain in understanding the operational status of the mine, the recovery rate, and processing variability.

Data is collected and presented, but at the end of the shift, the team still does not understand throughput variance and processing loss. Every mining company suffers from the same knowledge gap, caused by false perceptions regarding:

  • The mine
  • The production equipment
  • The operating processes

Processes are not the problem. The core problem is making the right decisions despite the information gaps that exist.

This is not meant to undermine the efforts of operational teams; they are highly skilled and experienced. The issue is systemic. The mining industry has invested heavily in data collection and processing technologies, yet it has largely neglected the most important gap in the entire data and decision cycle.

Your problems are not about mine capacity. They are about the gaps in understanding the state of the operational mine and your decision-making capabilities. This applies to every part of the operational value chain.

The Silent Destruction of Ore Variability

Ore variability is one of those problems that does not announce itself. It acts silently, and it shows up where it hurts the most: in grinding energy, in reagent consumption, and in recovery.

The problem compounds when you consider that the most common operating mode across the industry is reactive. Sample assays come back post-process. By the time a shift in feed density, grade, or mineral species is known, the damage is already done. Adjustments are reactive, suboptimal, and made without foresight.

Overcoming Challenges in HPAL Operations

This is particularly the case in High-Pressure Acid Leach (HPAL) operations. Adjusting and managing constantly shifting feed compositions is a big challenge. Managing the tolerable errors in acid and steam dosing in real time is equally difficult.

One operation that encountered a similar challenge noted a complete lack of real-time control and guidance. Because of that, operators depended on historical averages and manual intuition to manage dosing. This is a reasonable but highly imprecise approach.
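
The gap between dosing to a historical average and dosing to the feed actually in front of you can be sketched with a toy mass balance. Everything below is hypothetical: the mineral list, the consumption factors, and the compositions are invented for illustration, not taken from any real HPAL circuit.

```python
# Toy sketch: acid demand depends on the acid-consuming minerals actually
# in the feed, which shift hour to hour. All numbers are hypothetical.

# kg of H2SO4 consumed per tonne of feed, per 1% of each mineral (invented)
ACID_FACTORS = {"magnesium": 40.0, "aluminium": 30.0, "iron": 10.0}

def acid_demand(composition_pct: dict) -> float:
    """Estimated kg H2SO4 per tonne for a given feed composition (%)."""
    return sum(ACID_FACTORS[m] * pct for m, pct in composition_pct.items())

historical_avg = {"magnesium": 1.0, "aluminium": 0.8, "iron": 2.0}
this_hour     = {"magnesium": 1.6, "aluminium": 0.7, "iron": 2.4}

print(f"dosing to the average: {acid_demand(historical_avg):.0f} kg/t")
print(f"what this hour needs:  {acid_demand(this_hour):.0f} kg/t")
```

Dosing to the average systematically underdoses the hard hours and overdoses the easy ones; real-time characterization is what removes that error.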

The roll-out of an AI mining optimization software layer for the first time over an existing process—with minimal or no infrastructural changes—demonstrated real-time identification of optimal dosing under continuously changing conditions. The results were clear:

  • Positive gains delivered during 42% of operating hours
  • 17% of off-spec operating hours corrected thanks to system recommendations

For HPAL operations, this improvement in stability directly correlates to savings in reagents, higher energy efficiency, and recovery improvements for nickel and cobalt.

The system required minimal but essential changes in the operators' workflow, making training simple. It provided an accurate and reliable depiction of what the process required in the moment, rather than a retroactive view from two hours prior. This is industrial AI demonstrating its true potential: not the replacement of human judgment, but its refinement.

Where Value Actually Leaks: Mine to Mill

To understand why most mines do not perform to their potential, we must answer one critical question: what causes observed value loss along the value chain?

  

Value loss is not concentrated in a single area. It scatters across the value chain, and more importantly, all areas interact with each other. At a mine, the main issue often relates to traceability.

Ore moves through the mine without a record of where it is going or why. The characteristics of materials coming out of the pit are averaged or obscured long before they reach the plant.

The Cost of Poor Traceability

One operation found that incorrect ore dumping—directing material to the wrong location—resulted in lost productivity costing approximately $500,000 annually due to negative downstream impacts. Once they implemented a block-level ROM pad management software solution with proper traceability, that cost dropped to $8,000 in just two months. The ore had not changed. The only thing that changed was routing discipline.

In another example, a mine had approximately 100,000 tonnes of medium to high-grade ore sitting unaccounted for in a stockpile. The material was valued at roughly $11 million USD, but the mine did not know it was there.

This is not a geological failure; it is a digital twin integration failure. Without a detailed model of long-term stockpiles, operations are forced to make decisions based on averages, masking massive variabilities. That is an $11 million decision-making gap.

The Problem with Stockpile Averages

Moving to the stockpile layer only deepens the problem. Most ore stockpile management uses coarse averages: a single grade estimate for thousands of tonnes of highly variable material. Geologists and planners know this. However, the tools to model stockpiles at that level of operational detail were historically unavailable.

Modern tools now include:

  • Block-by-block stockpile modelling
  • Grade probability assignments
  • Real-time short-term AI ore feed forecasting

When operations access that level of granularity, variability and blending optimization improve. Mill performance improves not because the ore improves, but because reclaim planning decisions improve.
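
One simple instance of a reclaim-planning decision is the two-stockpile lever rule: given two piles of known grade, what split hits the plant's target feed grade? A minimal sketch, with hypothetical grades and target:

```python
# Two-stockpile blending via the lever rule (mass balance). The grades
# and target below are hypothetical.

def blend_split(grade_a: float, grade_b: float, target: float) -> float:
    """Return the fraction of feed drawn from stockpile A so that the
    blended grade equals `target`. The target must lie between the two
    stockpile grades."""
    lo, hi = sorted((grade_a, grade_b))
    if not (lo <= target <= hi):
        raise ValueError("target grade not achievable from these stockpiles")
    # mass balance: f * grade_a + (1 - f) * grade_b = target
    return (target - grade_b) / (grade_a - grade_b)

# Hypothetical example: high-grade pile at 2.0%, low-grade pile at 0.8%,
# plant wants a steady 1.4% feed.
f = blend_split(2.0, 0.8, 1.4)
print(f"draw {f:.0%} of feed from the high-grade pile")  # 50%
```

Real blending optimizers juggle many piles, hardness, and penalty elements at once, but the underlying decision is this same mass balance made with better inputs.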

Bringing Visibility to the Plant

Then there is the plant, where the invisible becomes visible and unexplained issues arise. One of the most important parameters for a concentrator is the grind size, P80. Most of the time, P80 is estimated sporadically or inferred through other parameters.

In a single nickel-copper concentrator, management discovered that the P80 target was only achieved 49% of the time. The estimated value of controlling grind size in this scenario is $3.4 million per year. Real-time optimization and connected upstream characterization in mining are exactly what close this gap.
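
For readers unfamiliar with the parameter: P80 is the screen size that 80% of particles pass. A minimal sketch of how it can be estimated from a cumulative-passing curve by linear interpolation, using invented sieve data:

```python
# Estimate P80 by linear interpolation on a cumulative-passing sieve
# curve. The sieve sizes and percentages are hypothetical.

def p80(sizes_um, pct_passing):
    """Interpolate the size at which 80% of material passes.
    `sizes_um` ascending, `pct_passing` the matching cumulative %."""
    for i in range(1, len(sizes_um)):
        if pct_passing[i] >= 80.0:
            x0, x1 = sizes_um[i - 1], sizes_um[i]
            y0, y1 = pct_passing[i - 1], pct_passing[i]
            return x0 + (80.0 - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("curve never reaches 80% passing")

sizes = [38, 53, 75, 106, 150, 212]     # screen sizes, microns (invented)
passing = [22, 35, 55, 72, 86, 95]      # cumulative % passing (invented)
print(f"P80 ≈ {p80(sizes, passing):.0f} µm")
```

In practice the sieve curve itself is the scarce input: it arrives sporadically, which is why real-time inference of grind size matters.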

The Change that Matters: From Monitoring to Decision-Making

Collecting data is a known strength in the mining industry. However, processing that data into actionable decisions in a timely manner is a notable weakness.

There is a massive gap between a monitoring tool and a decision-making tool. Monitoring tools tell you what happened. Decision-making tools tell you what to do, and can even act on your behalf without needing to be told. This is the difference between waiting in a control room to catch up versus controlling the process from ahead.

Instead of receiving a shift report showing a drop in recovery, an operator receives an in-shift recommendation to adjust reagent dosing because feed hardness has changed. The system predicts a recovery impact in the next two hours and explains its rationale, showing changes in the feed and the outcomes of previous similar adjustments.
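
The shape of such a recommendation can be sketched as a simple rule: compare measured feed hardness against the baseline the current dosing assumes, and propose a proportional adjustment with its rationale. The gain, threshold, and units below are hypothetical illustrations, not NTWIST parameters:

```python
# Toy in-shift recommendation: proportional dosing adjustment with a
# stated rationale. Gain, tolerance, and units are invented.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    rationale: str

def dosing_advice(hardness_now: float, hardness_baseline: float,
                  dose_gpt: float, gain: float = 0.04) -> Recommendation:
    drift = hardness_now - hardness_baseline
    if abs(drift) < 0.5:                  # within tolerance: leave it alone
        return Recommendation("hold dosing", "feed hardness near baseline")
    new_dose = dose_gpt * (1 + gain * drift)
    return Recommendation(
        f"set reagent dosing to {new_dose:.1f} g/t",
        f"feed hardness moved {drift:+.1f} units from baseline; "
        "similar past shifts saw recovery drop without compensation",
    )

rec = dosing_advice(hardness_now=14.0, hardness_baseline=12.0, dose_gpt=50.0)
print(rec.action)     # set reagent dosing to 54.0 g/t
```

A production system learns the gain from history and attaches predicted recovery impact; the point here is only the structure: an action plus an explanation, delivered in-shift.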

Unified Intelligence with NTWIST

This is where the real power of connected mine-to-mill optimization thinking begins. The entire system acts consistently as one comprehensive intelligence. Planners can simulate scenarios before committing to them, testing reclaim sequences, shift patterns, blending strategies, and plant constraints.

This is exactly the architecture on which NTWIST’s MineMax platform is built:

  • OreMax and DynaMax: manage stockpile intelligence
  • PlanMax: optimizes ore blending against plant constraints
  • MillMax: provides real-time optimization of milling circuits in the plant
  • Nexus: connects operational scheduling across manufacturing systems

All applications can function alone, but the real value lies in connecting them as part of a unified decision layer.

Data Driving Decisions: What Changes

The "clean data" often cited in technology pitches is usually too good to be true. Let’s clarify what actually changes when you apply AI-powered mining software the right way.

  

In one deployment, dynamic manufacturing scheduling and real-time production planning drove a 29% improvement in throughput. This didn't come from new equipment. It came from better job sequencing and resource allocation that eliminated time gaps silently consuming productive output. The plant hadn’t been broken; it had just been underutilizing its capability due to a scheduling layer that couldn’t handle real-world complexities.

In HPAL operations, the system achieved a nickel extraction prediction error of just 0.1%. This level of accuracy quickly gained the trust of metallurgists and operators, leading to consistent recovery, lower reagent overuse, and fewer downstream upsets.

Building Confidence in Stockpile-Heavy Operations

For stockpile-heavy operations, shifting to probabilistic block models produces something underrated but critically important: confidence.

Mine planners can now make better decisions. When a planner runs a reclaim scenario and sees both the expected grade and the uncertainty range around that estimate, decisions improve. Blending strategies are optimized, and penalty exceedances become avoidable rather than something to explain after the fact. This AI ROI compounds—every better decision makes the next one cheaper.
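
The expected-grade-plus-uncertainty view described above can be sketched with a small Monte Carlo simulation over a hypothetical reclaim sequence, where each block carries a grade distribution rather than a point estimate:

```python
# Monte Carlo sketch of a probabilistic reclaim scenario: report both
# the expected blend grade and an uncertainty range. Block means and
# standard deviations are hypothetical.
import random
from statistics import mean, quantiles

random.seed(7)

# (mean grade %, std dev) for blocks in a hypothetical reclaim sequence
blocks = [(1.8, 0.3), (1.2, 0.2), (0.9, 0.4), (1.5, 0.25)]

def simulate_blend(n=10_000):
    draws = []
    for _ in range(n):
        # equal-tonnage blend: one sampled grade per block, averaged
        draws.append(mean(random.gauss(mu, sd) for mu, sd in blocks))
    return draws

sims = simulate_blend()
p5, *_, p95 = quantiles(sims, n=20)     # 5th and 95th percentiles
print(f"expected blend grade: {mean(sims):.2f}%")
print(f"90% range: {p5:.2f}% – {p95:.2f}%")
```

Seeing the range, not just the expectation, is what lets a planner judge whether a penalty limit is genuinely safe or only safe on average.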

The ultimate results?

  • Stable throughput
  • Certainty
  • Less variability
  • Absolute trust in the system from the operational team

Industrial AI solutions do not eliminate operational complexity. Instead, they shrink the delta between any change and the appropriate response. The ideal result of AI in mining is substantial energy efficiency driven by precise dosing, grinding, and circuit corrections.

Final Thoughts

Not everything that matters in mining can be captured as another data point. The future of mining is not about developing additional technology just to collect more data. The future is about making technology-driven decisions quickly and efficiently.

Most mining operations are missing the most important part: decisions. The mining operation of the next decade is not about competing for the best ore, but rather about making the best decisions with the ore you have.

Book a demo with NTWIST to see how mine-to-mill decision intelligence applies to your specific operation—whether you're managing stockpile uncertainty, chasing throughput stability, or trying to close the gap between plan and actual.

Frequently Asked Questions

We mine for insights (pun intended). Below are answers to common questions mining teams have when exploring AI-empowered decision intelligence for the first time.

Q: What is mine-to-mill optimization, and how does it differ from traditional production monitoring?

A: Mine-to-mill optimization leverages data from all components in the value chain—from geological modelling and ROM pad management to stockpile planning and plant control. Traditional monitoring captures the past. Mine-to-mill optimization predicts conditions that haven't occurred yet, allowing the plant to operate on predictive rather than reactive intelligence.

Q: In what ways is mining software using AI different from traditional ERP and scheduling programs?

A: Traditional ERP systems manage the structured and predictable. AI mining optimization solutions are designed for real-world situations that an ERP cannot predict, such as continuous feed variability, non-linear circuit interactions, and probabilistic grade distributions. Dynamic AI scheduling software continuously adjusts as conditions change, unlike static scheduling tools.

Q: Why does ROM pad management software impact plant performance?

A: ROM pad management software provides block-level visibility into grade, hardness, mineralogy, and penalty elements. This granularity directly improves the precision of feed predictions and blending decisions. When the mill understands the exact distribution and variability of incoming material, it can set itself up to run optimally.

Q: What is a mining digital twin and what is its purpose?

A: A mining digital twin maintains a continually updated virtual model of operations, including stockpiles, circuits, and process parameters. It allows planners to run scenarios and test reclaim sequences or feed blend changes before making actual decisions. This drastically reduces the trial-and-error approach that causes excessive reagent use and poor recovery.

Q: How is real-time AI ore forecasting different from traditional lab assays?

A: Lab assays are accurate but notoriously delayed. Real-time AI ore forecasting uses sensors, cameras, and historical data to characterize ore the moment it enters the system. This bridges the gap between sample data and operational control, keeping operators ahead of process issues rather than responding after the fact.

Q: What does dynamic production scheduling mean in a mining context?

A: It refers to the ability to adapt production schedules to real-time changes, such as equipment availability, rush orders, or material delays. Instead of forcing teams to stick to a broken static plan, dynamic scheduling alters the plan in real-time to reduce disruptions and maintain targets without requiring planners to start from scratch.
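
As a toy illustration of the idea: a dynamic scheduler re-sequences the surviving jobs when a unit goes offline rather than abandoning the plan. The jobs, equipment names, and earliest-due-date rule below are illustrative assumptions, not a description of any specific scheduling product:

```python
# Toy dynamic rescheduling: drop jobs on unavailable equipment and
# re-sequence the rest by earliest due date (EDD). All data invented.

def reschedule(jobs, available):
    """jobs: list of (job_id, equipment, due_hour). Keep only jobs whose
    equipment is still available, ordered by due time."""
    runnable = [j for j in jobs if j[1] in available]
    return sorted(runnable, key=lambda j: j[2])

plan = [("crush-A", "crusher1", 6), ("mill-B", "mill2", 3),
        ("crush-C", "crusher2", 4)]

# crusher2 goes down mid-shift:
new_plan = reschedule(plan, available={"crusher1", "mill2"})
print([j[0] for j in new_plan])   # ['mill-B', 'crush-A']
```

A real scheduler would also re-route the stranded job and re-optimize against targets, but the principle is the same: the plan adapts instead of breaking.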

Q: What are the potential returns on investment (ROI) from applying AI in mining?

A: Actual deployments yield massive value. ROM pad traceability improvements have eliminated $500,000 yearly mis-dumping costs. Probabilistic stockpile modelling has recovered millions in unaccounted ore and avoided up to $10 million in drilling costs. Real-time plant optimization frequently increases throughput and recovery by $1 to $5 million per year for well-instrumented sites.

Q: Is NTWIST’s platform useful for operations without advanced systems?

A: Yes. NTWIST’s applications start with basic available data and scale as systems mature. OreMax can produce models using periodic surveys, while MillMax can begin with historian extracts and progress to live DCS integration. The goal is immediate value without waiting for perfect data infrastructure.

Q: How does ore blending optimization software improve recovery and reduce costs?

A: It replaces intuition-driven reclaim decisions with quantitative feed plans that balance grade, hardness, and penalty elements against plant constraints. Operations that implement geometallurgical blending software consistently see reagent savings of up to 30% and recovery improvements of up to 1.6% simply from better upstream feed management.