Demand Forecasting in Discrete Manufacturing Using Artificial Intelligence

Discrete manufacturers face a hard planning problem: demand is highly uncertain, product variety is wide, bills of materials (BOMs) are deeply nested, and lead times are long. Classical statistical forecasting models such as ARIMA, exponential smoothing, and moving averages often fail to capture these multidimensional demand dynamics. This article presents a practical design for replacing rule-based or statistical forecasting with a modern AI stack, built only from open-source components and the data already in the customer’s ERP system.


Case for AI in Discrete Manufacturing Planning

Statistical methods work well when demand is stable and seasonal patterns repeat. The nature of discrete manufacturing makes such conditions rare. A typical medium-size enterprise faces:

  • Intermittent demand in long-tail products with limited ordering records
  • Product substitution and cannibalization within a family of products
  • Price and promotion sensitivity leading to sudden changes in demand
  • Bottlenecks in production due to complexity of materials, bill of materials, and lead times

Modern machine learning models, particularly attention-based time series architectures, can learn these patterns from historical data, incorporate a large number of explanatory variables, and produce probabilistic forecasts rather than single point estimates.
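
Before any modeling, it helps to quantify the intermittency mentioned above. A minimal sketch of classifying a demand series by average demand interval (ADI) and squared coefficient of variation, using the common Syntetos-Boylan cutoffs of ADI = 1.32 and CV² = 0.49 (the function name is illustrative, not from this article):

```python
import numpy as np

def classify_demand(series: list[float]) -> str:
    """Classify a demand history as smooth / erratic / intermittent / lumpy."""
    arr = np.asarray(series, dtype=float)
    nonzero_idx = np.flatnonzero(arr > 0)
    if len(nonzero_idx) < 2:
        return "insufficient history"
    # Average Demand Interval: mean gap (in periods) between nonzero demands
    adi = np.mean(np.diff(nonzero_idx))
    sizes = arr[nonzero_idx]
    cv2 = (sizes.std() / sizes.mean()) ** 2  # squared coefficient of variation
    if adi < 1.32 and cv2 < 0.49:
        return "smooth"
    if adi < 1.32:
        return "erratic"
    if cv2 < 0.49:
        return "intermittent"
    return "lumpy"
```

Long-tail SKUs that land in the "intermittent" or "lumpy" quadrants are exactly the ones where classical smoothing methods struggle most.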

System Architecture

The pipeline consists of five distinct, loosely coupled layers. Each can be upgraded independently of the rest, which keeps the design flexible as the machine learning ecosystem evolves.


Fig. 1: Demand forecasting system architecture.

Data Ingestion

Three data domains are non-negotiable: transactional sales history, inventory status, and product master characteristics. A SQLAlchemy adapter maps the ERP schema onto tidy DataFrames suitable for the feature pipeline.

from sqlalchemy import create_engine, text
import pandas as pd

class ERPConnector:
    """Read-only adapter from the ERP database to pandas DataFrames."""

    def __init__(self, conn_str: str):
        self.engine = create_engine(conn_str)

    def pull_sales(self, lookback_days: int = 730) -> pd.DataFrame:
        # Daily confirmed demand per SKU/plant; samples and returns excluded
        return pd.read_sql(text("""
            SELECT item_id AS sku, plant_code AS location,
                   CAST(order_date AS DATE) AS date,
                   SUM(qty_confirmed) AS demand_units
            FROM   sales_order_items
            WHERE  order_date >= NOW() - INTERVAL :d DAY
              AND  order_type NOT IN ('SAMPLE','RETURN')
            GROUP  BY 1, 2, 3
        """), self.engine, params={"d": lookback_days})

    def pull_inventory(self) -> pd.DataFrame:
        # Current stock position per SKU/location, skipping deleted records
        return pd.read_sql(text("""
            SELECT sku, location, unrestricted_stock,
                   in_transit_qty, safety_stock_qty, days_of_supply
            FROM   inventory_master WHERE deletion_flag = 0
        """), self.engine)

Feature Engineering

The feature layer encodes manufacturing domain knowledge directly: BOM complexity, fill-rate history, and demand intermittency. Key feature categories include lagged demand (T-7, T-14, T-28), rolling averages, inventory stress ratios, BOM depth, and calendar encodings. A Polars pipeline computes these efficiently:

import polars as pl

def build_features(df: pl.DataFrame) -> pl.DataFrame:
    G = ["sku", "location"]
    return (
        df.sort(G + ["date"])
        .with_columns([
            # Lagged demand at weekly offsets, computed per SKU/location group
            pl.col("demand_units").shift(7).over(G).alias("lag_7d"),
            pl.col("demand_units").shift(14).over(G).alias("lag_14d"),
            pl.col("demand_units").shift(28).over(G).alias("lag_28d"),
            # Trailing 28-day mean, shifted by one day to avoid target leakage
            pl.col("demand_units").shift(1).rolling_mean(window_size=28)
              .over(G).alias("roll_mean_28d"),
        ])
    )

Model – Temporal Fusion Transformer (TFT)

The Temporal Fusion Transformer (TFT) is a strong choice for probabilistic forecasting over multiple horizons. It natively supports known future inputs such as promotions and holidays, handles static SKU metadata, and exposes attention weights for interpretability.

from pytorch_forecasting import TemporalFusionTransformer, TimeSeriesDataSet, QuantileLoss
from pytorch_forecasting.data import GroupNormalizer
import lightning as L

ds = TimeSeriesDataSet(
    df_train,
    time_idx="time_idx", target="demand_units",
    group_ids=["sku", "location"],
    max_encoder_length=182, max_prediction_length=28,
    static_categoricals=["product_family", "bom_tier"],
    time_varying_known_reals=["price_index", "promo_flag"],
    time_varying_unknown_reals=["demand_units", "lag_7d", "lag_28d", "cv_13w"],
    target_normalizer=GroupNormalizer(groups=["sku", "location"]),
)

model = TemporalFusionTransformer.from_dataset(
    ds, learning_rate=3e-3, hidden_size=64,
    attention_head_size=4, dropout=0.15,
    loss=QuantileLoss(quantiles=[0.1, 0.5, 0.8, 0.95]),
)
# train_loader / val_loader are built from the datasets,
# e.g. ds.to_dataloader(train=True, batch_size=128)
L.Trainer(max_epochs=50, accelerator="gpu", gradient_clip_val=0.1).fit(
    model, train_dataloaders=train_loader, val_dataloaders=val_loader
)
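
The dataset definition above expects a monotonically increasing integer time_idx, which the ERP extract does not provide directly. One way to derive it, assuming daily granularity (helper name is illustrative):

```python
import pandas as pd

def add_time_idx(df: pd.DataFrame, date_col: str = "date") -> pd.DataFrame:
    """Derive the integer time_idx that TimeSeriesDataSet expects: 0 at the
    earliest date in the frame, incrementing by one per calendar day."""
    df = df.assign(**{date_col: pd.to_datetime(df[date_col])})
    origin = df[date_col].min()
    return df.assign(time_idx=(df[date_col] - origin).dt.days)
```

Using a shared origin across all series keeps time_idx values aligned between SKUs, which matters when batches mix series.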

Forecast API

The forecast model is served through FastAPI. Consumers of the forecasts, such as S&OP dashboards, replenishment calculations, and ERP write-back, hit a single RESTful endpoint that returns probabilistic intervals at P10, P50, P80, and P95.

from fastapi import FastAPI
from pydantic import BaseModel
import torch, mlflow

app = FastAPI()
model = mlflow.pytorch.load_model("models:/tft-demand/Production")
model.eval()


class ForecastRequest(BaseModel):
    sku: str
    location: str
    horizon_days: int = 28

@app.post("/v1/forecast")
async def forecast(req: ForecastRequest):
    # load_encoder_context / format_quantile_output are project helpers that
    # assemble the encoder history and reshape the quantile tensor for JSON
    ctx = load_encoder_context(req.sku, req.location)
    with torch.no_grad():
        preds = model.predict(ctx, mode="quantiles", return_index=True)
    return {"sku": req.sku, "location": req.location,
            "forecasts": format_quantile_output(preds)}

Inventory Planning and Drift Detection

Probabilistic forecasts feed directly into dynamic safety stocks and reorder points, grounded in realistic uncertainty estimates rather than generic rules applied to historical averages. Demand profiles drift as products move through their lifecycle, customer mixes shift, and supply chains are disrupted, so the pipeline monitors feature distributions continuously and triggers retraining when the deviation exceeds a threshold.
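
One common way to implement that drift check is the Population Stability Index over key features; a minimal sketch (the 0.2 threshold is a conventional rule of thumb, not from this article):

```python
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a training-time feature sample and
    a recent one; values above ~0.2 are a common retraining trigger."""
    # Bucket edges from the baseline's own quantiles
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    # Assign each value to a bucket; clip catches values outside the training range
    b_idx = np.clip(np.searchsorted(edges, baseline, side="right") - 1, 0, bins - 1)
    c_idx = np.clip(np.searchsorted(edges, current, side="right") - 1, 0, bins - 1)
    b_frac = np.bincount(b_idx, minlength=bins) / len(baseline)
    c_frac = np.bincount(c_idx, minlength=bins) / len(current)
    eps = 1e-6  # avoid log(0) for empty buckets
    b_frac, c_frac = b_frac + eps, c_frac + eps
    return float(np.sum((c_frac - b_frac) * np.log(c_frac / b_frac)))
```

Running this per feature on a schedule, and retraining when any PSI crosses the threshold, keeps the model honest without manual review of every SKU.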

Build vs. Buy

The build-versus-buy decision is never easy. Commercial APS systems excel at off-the-shelf workflows, but they carry real drawbacks for manufacturers who need transparency, control, or support for nuanced processes. A custom AI pipeline built on open-source components can go live within 8-16 weeks, often at a tenth to a twentieth of the cost, and gives full visibility into every forecast.

The ideal planning tool isn’t necessarily the one with the most features, but the one where your team understands every inference it produces.

Conclusion

AI-driven demand forecasting is within reach of any discrete manufacturer with good ERP data and a commitment to investing in custom development. The architecture described above, starting with domain-specific feature engineering, followed by probabilistic forecasting with a TFT model and drift detection, is designed for incremental implementation, delivering benefits at every step. The underlying principle never changes: encode domain knowledge into features, quantify uncertainty, and treat the machine learning model as a living artifact that evolves with the business.
