Prescription Allergy Medicine Prediction Algorithm

Most headlines these days gush about Large Language Models (LLMs)—what ChatGPT, Claude, or the latest Chinese AI giant can do. They’re incredible tools. They’re also pricey to run and, at times, a bit of a loose cannon. That’s why a lot of practical, measurable value still comes from traditional supervised learning. These models are simpler, cheaper, and trained on the data you already own. They don’t try to be everything—they do one job well.

A large pharmaceutical company came to Subsense with a clear problem: demand for its prescription allergy medicine was all over the place. Allergy demand is brutally seasonal—it shifts with rainfall, pollen levels, and a dozen other environmental factors. The result was predictable chaos: some months they were short and lost revenue; other months they overproduced and ate the costs.

We designed a two-horizon solution: short-term and long-term demand prediction. The idea was simple—give planning two lenses. One lens looks a year out for capacity and procurement decisions. The other looks three months out for production scheduling and inventory moves.

We trained the models on the following sources (a sketch of how they join appears after the list):

  • Historical production and sales for the different allergy meds (birch pollen, hay fever, etc.).
  • Observed weather across the main market (actuals, not historical forecasts—the “easy” datasets are often the wrong ones).
  • Pollen readings from 800 locations nationwide.
  • Google search trends for symptoms like “runny nose” and “red eyes.”
  • Reported allergy cases by location over the last 3 years.
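
Merging those sources into one modeling table was most of the unglamorous work. Here is a minimal sketch of that fusion step; the file names, column names, and monthly per-region granularity are illustrative assumptions, not the client's actual schema:

```python
# Minimal data-fusion sketch: aggregate each raw source to calendar months,
# then join everything on (month, region). All names are hypothetical.
import pandas as pd

def load_monthly(path, date_col, value_cols, how="mean"):
    """Read one raw source and aggregate it to calendar months per region."""
    df = pd.read_csv(path, parse_dates=[date_col])
    df["month"] = df[date_col].dt.to_period("M").dt.to_timestamp()
    return df.groupby(["month", "region"], as_index=False)[value_cols].agg(how)

sales   = load_monthly("sales.csv",   "order_date",   ["units_sold"], how="sum")
weather = load_monthly("weather.csv", "obs_date",     ["rainfall_mm", "temp_c"])
pollen  = load_monthly("pollen.csv",  "reading_date", ["pollen_index"])
trends  = load_monthly("trends.csv",  "week",         ["runny_nose", "red_eyes"])

# One row per (month, region): the demand we want to explain, plus the
# environmental and behavioral signals observed in that same month.
features = (
    sales.merge(weather, on=["month", "region"], how="left")
         .merge(pollen,  on=["month", "region"], how="left")
         .merge(trends,  on=["month", "region"], how="left")
         .sort_values(["region", "month"])
)
```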

The long-term lens, a 12-month SARIMA (Seasonal ARIMA) forecast, delivered 75%+ accuracy. The short-term lens, a rolling 3-month model built with XGBoost, hit 85%+. Together, they boosted demand-prediction quality by ~30%. No LLMs, no AI agents—just two focused algorithms that paid for themselves.
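
For the curious, the shape of each model looked roughly like this. Treat these as hedged sketches building on the hypothetical features table above: the orders, hyperparameters, and horizon wiring are placeholders, not the tuned production values.

```python
# Long-term lens: SARIMA on the national monthly demand series.
from statsmodels.tsa.statespace.sarimax import SARIMAX

monthly = features.groupby("month")["units_sold"].sum()   # national totals
sarima = SARIMAX(monthly,
                 order=(1, 1, 1),                # placeholder ARIMA terms
                 seasonal_order=(1, 1, 1, 12))   # yearly (12-month) seasonality
long_term = sarima.fit(disp=False).forecast(steps=12)     # one year out
```

The short-term lens trades the clean seasonal story for fresher signals:

```python
# Short-term lens: gradient-boosted trees predicting demand three months
# ahead from the environmental and search signals observed today.
from xgboost import XGBRegressor

features["target_3m"] = features.groupby("region")["units_sold"].shift(-3)
train = features.dropna(subset=["target_3m"])

X_cols = ["rainfall_mm", "temp_c", "pollen_index", "runny_nose", "red_eyes"]
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(train[X_cols], train["target_3m"])

latest = features.groupby("region").tail(1)   # each region's newest month
short_term = model.predict(latest[X_cols])    # demand three months out
```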

A few details that mattered:

  • Data, not vibes. We aligned timestamps across sources (weather, pollen, searches, sales) and avoided “leakage” by only using information that would have been known at prediction time (the backtest sketch after this list shows one way to enforce that).
  • Right features, right horizon. The long-term model leaned on seasonal patterns; the short-term model used fresher signals like recent pollen spikes and search behavior.
  • Backtesting that doesn’t lie. We validated on multiple years and regions to make sure we weren’t overfitting to a single nasty spring.
  • Human-in-the-loop guardrails. Planners could override with context (e.g., a supplier outage), but overrides were logged and compared to model outcomes to keep everyone honest.
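
The first and third bullets are worth making concrete. A rolling-origin backtest, sketched below under the same hypothetical schema, enforces both at once: lag features start at the 3-month forecast horizon, so every input would have been observed when the forecast was issued, and the evaluation rolls across many cutoffs so no single season dominates.

```python
# Rolling-origin backtest with leakage-safe lag features.
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

def add_lags(df, cols, lags=(3, 6, 12)):
    """Lags start at the 3-month horizon: only data known at forecast time."""
    out = df.sort_values(["region", "month"]).copy()
    for col in cols:
        for k in lags:
            out[f"{col}_lag{k}"] = out.groupby("region")[col].shift(k)
    return out

def rolling_backtest(df, X_cols, target, cutoffs):
    fold_errors = []
    for cutoff in cutoffs:
        # Train strictly on months the planners had seen by the cutoff...
        train = df[df["month"] <= cutoff].dropna(subset=X_cols + [target])
        # ...then score the following quarter.
        test = df[(df["month"] > cutoff) &
                  (df["month"] <= cutoff + pd.DateOffset(months=3))]
        test = test.dropna(subset=X_cols + [target])
        if train.empty or test.empty:
            continue
        model = XGBRegressor(n_estimators=300, max_depth=4)
        model.fit(train[X_cols], train[target])
        pred = model.predict(test[X_cols])
        fold_errors.append(np.mean(np.abs(pred - test[target]) / test[target]))
    return float(np.mean(fold_errors))   # mean absolute percentage error

# Quarterly cutoffs spanning several springs (dates are illustrative).
cutoffs = pd.date_range("2021-03-01", "2023-12-01", freq="3MS")
lagged = add_lags(features, ["units_sold", "pollen_index", "runny_nose"])
lag_cols = [c for c in lagged.columns if "_lag" in c]
mape = rolling_backtest(lagged, lag_cols, "units_sold", cutoffs)
```

Averaging error across several years of cutoffs is how you find out whether you modeled allergy season or just memorized one bad spring.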

What this meant for the business: fewer stockouts, less waste, smoother production, and more predictable margins. Planning stopped being a monthly fire drill and turned into a data-driven routine.

If you think your processes could benefit from applied AI—especially the kind that ships and sticks—reach out.