
Why Data Architecture Still Breaks in the Push for Aquaculture Intelligence


Most salmon farming companies don’t lack software; they lack a stack that behaves like one system.

Over the last few years, farms have added more tools than ever: feeding platforms, lice cameras, environmental sensors, ERP and inventory systems, vet notes, lab results, and Power BI dashboards. The footprint grew, and so did the expectations.

And yet, in many organizations, the same questions still trigger the same scramble. Not because teams aren’t capable, but because the architecture underneath the tools doesn’t support pulling answers forward. It supports reconstruction.

That gap matters more in 2026 than it did in 2023, because the industry is moving toward tighter operating windows, tighter compliance expectations, and higher pressure to standardize performance across sites. When biology sets the deadline, a stack that requires repeated reconstruction becomes a bottleneck.

 

The Reality: A Lot of Farms Choose to Build

A pattern is emerging across salmon farming: CTOs and technical teams are choosing to assemble something that looks like a modern enterprise data platform. It usually starts with a rational goal: centralize data, standardize it, and give the business one source of truth. Then the vendor list grows:

  • A data warehouse or lakehouse
  • An integration tool
  • A BI layer
  • A modeling environment
  • Reporting workflows for audits and internal reviews

Sometimes farms do this with external partners. Often they do it internally. Either way, the motivation is the same: the stack is fragmented, and the business wants consistency.

The hidden issue is what happens next. Once you try to scale learning across sites and seasons, storing data is no longer the hard part. The hard part is keeping the links and definitions stable as the real world changes.

 

Aquaculture Data Isn't Just "Data". It's Populations, Events, and Time.

A lot of modern platforms assume the world looks like transactions, users, sessions, or assets.

Salmon farming does not.

The unit of reality on a farm isn’t a row in a table. It’s a population moving through time, experiencing events, under changing conditions. That population has a history: handling, treatments, feeding changes, environmental exposure, health outcomes. It has boundaries: generation, site, cage, transfers, regrouping.

If your architecture can’t represent that reality cleanly, you can still build dashboards. But intelligence won’t compound. You’ll keep paying to re-explain what happened.
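
To make that concrete, here is a minimal sketch in Python of what “a population moving through time” looks like as a data structure rather than a row. The names (Population, Event, events_between) are ours for illustration only, not any vendor’s schema:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Event:
        """Anything that happens to a population: a treatment, a transfer, a feed change, a sample."""
        kind: str              # e.g. "treatment", "transfer", "lice_count"
        occurred_on: date
        details: dict = field(default_factory=dict)

    @dataclass
    class Population:
        """A group of fish tracked as one unit through time, with explicit boundaries."""
        population_id: str
        generation: str        # e.g. "2025-spring"
        site: str
        cage: str
        history: list = field(default_factory=list)

        def record(self, event: Event) -> None:
            """Events attach to the population's timeline, not to a dashboard."""
            self.history.append(event)

        def events_between(self, start: date, end: date) -> list:
            """The slice of history most 'why' questions are really asking about."""
            return [e for e in self.history if start <= e.occurred_on <= end]

The classes themselves are trivial. The point is where events live: on the population’s timeline, with its boundaries (generation, site, cage) carried along so history survives the next transfer or regrouping.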

This is why farms can invest heavily in modern tooling and still struggle to answer operational questions consistently, such as:

  • Why did the same treatment work at Site A but underperform at Site B?
  • What changed this week that explains the performance shift?
  • Which interventions actually worked under similar conditions?

The data exists in pieces. The problem is that many stacks don’t keep those pieces connected.

 

Integration Is the Part Everyone Underestimates

When farms talk about integration, they often mean “we can pull data into one place.” That’s necessary, but it’s not sufficient. The harder problem is semantic integration: making sure the data means the same thing across systems, sites, and time.

This is where reality sets in. A few examples we continue to see:

  • A mortality metric is recorded one way at one site and differently at another (a small sketch of the fix follows this list).
  • A treatment event exists, but it isn’t reliably linked to the correct cohort after transfers and regrouping.
  • Lice counts, environmental context, and operational notes live in different systems and timelines, so the “same” comparison isn’t actually comparable.
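
As a sketch of what fixing the first example involves (the field names are invented, and real feeds are messier), semantic integration means both sites end up expressed in one shared definition before anyone compares them:

    # Hypothetical raw records, as they might arrive from two sites' systems.
    site_a = {"site": "A", "date": "2026-03-02", "mortality_count": 84, "fish_on_site": 120_000}
    site_b = {"site": "B", "week": "2026-W09", "weekly_mortality_pct": 0.35}

    def to_daily_mortality_rate(record: dict) -> dict:
        """Map site-specific mortality fields onto one shared definition:
        daily mortality as a fraction of standing stock."""
        if "mortality_count" in record:
            rate = record["mortality_count"] / record["fish_on_site"]
            period = record["date"]
        elif "weekly_mortality_pct" in record:
            rate = (record["weekly_mortality_pct"] / 100) / 7   # spread the week evenly
            period = record["week"]
        else:
            raise ValueError("unknown mortality schema")
        return {"site": record["site"], "period": period, "daily_mortality_rate": rate}

    print(to_daily_mortality_rate(site_a))   # {'site': 'A', ..., 'daily_mortality_rate': 0.0007}
    print(to_daily_mortality_rate(site_b))   # {'site': 'B', ..., 'daily_mortality_rate': 0.0005}

The mapping is easy to write once. The cost is keeping it true as vendors, field names, and site practices change.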

When those links are fragile, trust breaks down, teams slow down, and the work shifts from acting to reconciling.

BI can only reflect the data model underneath it. If populations aren’t traced and events aren’t normalized across systems, BI will still produce charts but it won’t produce comparability, consistency, or trust. Unified reporting isn’t a visualization upgrade. It’s what happens when the underlying data is linked and standardized first.

 

Where General-Purpose Platforms Fit, and Where They Get Expensive

It’s worth being clear: tools like Databricks, Power BI, Snowflake, and Palantir aren’t “wrong choices.” They’re powerful systems, especially when an organization has a strong internal platform team.

But they are general-purpose platforms. Aquaculture’s challenge isn’t getting data into a warehouse or building dashboards. It’s domain-specific linkage and operational workflows that stay correct through generations, site moves, vendor changes, and reporting cycles.

Here's a simple way to think about it.

1) Data Platforms (Lakehouse / Warehouse)

These are great at storing data and making it accessible across the organization. They’re not, however, inherently designed to model biological production context.

To make them work for farm intelligence, you still need to build and maintain:

  • Linkage of fish cohorts as they move between facilities, farms, and cages

  • Attribution of events across systems (for example: tying a PCR result to the correct cage, population, and time window; a sketch of that attribution follows this list)

  • Stable naming conventions for treatments, products, and operational events so the same thing isn’t tracked five different ways

  • Replicable workflows so all of the above updates automatically and doesn’t depend on manual intervention or one team “knowing how it works”
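
Here is roughly what the attribution item means in code. It’s a deliberately small sketch with made-up identifiers; a real version has to handle splits, merges, and partial transfers:

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class Occupancy:
        """Which population occupied which cage over which interval."""
        population_id: str
        cage: str
        start: date
        end: Optional[date]    # None = still there

    # Hypothetical occupancy history built from stocking and transfer records.
    occupancies = [
        Occupancy("pop-2025-17", "cage-04", date(2025, 4, 1), date(2025, 9, 15)),
        Occupancy("pop-2025-17", "cage-09", date(2025, 9, 15), None),   # moved after a regrouping
        Occupancy("pop-2025-22", "cage-04", date(2025, 9, 20), None),   # new population in the old cage
    ]

    def attribute_sample(cage: str, sampled_on: date) -> Optional[str]:
        """Resolve a lab result taken from a cage on a date to the population
        that was actually in that cage at that time."""
        for occ in occupancies:
            if occ.cage == cage and occ.start <= sampled_on and (occ.end is None or sampled_on < occ.end):
                return occ.population_id
        return None

    print(attribute_sample("cage-04", date(2025, 8, 10)))   # pop-2025-17
    print(attribute_sample("cage-04", date(2025, 10, 2)))   # pop-2025-22

None of this is hard in isolation. The work is keeping the occupancy history complete and current across every system that records a movement.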

2) BI Tools

BI is useful for visibility. It’s rarely enough for learning. If the underlying data model isn’t stable, BI becomes a place where teams compensate. You can build dashboards forever and still not be able to answer “what worked” without manual analysis.

3) Operating Platforms

Some platforms can orchestrate workflows and present a coherent UI for decision-making. The catch is that they still need the right data architecture underneath them. Without aquaculture-specific linkage, “orchestration” becomes a polished layer over inconsistent inputs.

This is where many farms end up: strong tools, meaningful investment, but persistent friction. The expensive part isn’t the initial build; it’s the ongoing work of maintaining coherence as the real world shifts.

 

What a Farm Intelligence Architecture Should Look Like in 2026

A working architecture for farm intelligence isn’t complicated to describe. It’s just hard to implement without domain-native foundations.

It looks like:

Systems of record → standardized ingestion → population + event linkage → stable definitions → operational workflows → analytics that compound

In practical terms, that means a system that can do three things reliably (a skeletal sketch follows the list):

  • Resolve reality: link fish populations, movements, events, conditions, and outcomes into a consistent timeline

  • Stabilize meaning: enforce definitions so Site A and Site B are truly comparable

  • Drive action: turn those links into workflows teams can use within the generation, not just at reporting time
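
A toy, end-to-end version of that chain might look like the following. Every name and threshold is invented, and the operational-workflow step is reduced to a single needs_review flag, but the order and the contracts between stages are the point:

    def standardize(raw: list) -> list:
        """Standardized ingestion: force every record into one shape and unit."""
        return [{"cage": r["cage"].upper(), "day": r["day"], "metric": r["metric"], "value": float(r["value"])}
                for r in raw]

    def link_to_population(records: list, cage_to_population: dict) -> list:
        """Population + event linkage: attach each record to the population it belongs to."""
        return [{**r, "population": cage_to_population[r["cage"]]} for r in records]

    def apply_definitions(records: list) -> list:
        """Stable definitions: derived values computed one way, everywhere."""
        for r in records:
            r["needs_review"] = r["metric"] == "daily_mortality_rate" and r["value"] > 0.001
        return records

    def analytics(records: list) -> dict:
        """Analytics that compound: because linkage and definitions sit upstream,
        aggregation is a one-liner instead of a reconciliation project."""
        return {
            "records": len(records),
            "flagged_populations": sorted({r["population"] for r in records if r["needs_review"]}),
        }

    raw = [
        {"cage": "a-04", "day": "2026-02-03", "metric": "daily_mortality_rate", "value": "0.0014"},
        {"cage": "b-09", "day": "2026-02-03", "metric": "daily_mortality_rate", "value": "0.0006"},
    ]
    linked = link_to_population(standardize(raw), {"A-04": "pop-2025-17", "B-09": "pop-2025-22"})
    print(analytics(apply_definitions(linked)))   # {'records': 2, 'flagged_populations': ['pop-2025-17']}

Each stage here is trivially simple; what matters is that linkage and definitions sit upstream of analytics, not inside each dashboard.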

Once you have that, AI stops being something you bolt on and instead becomes something the organization can actually operationalize.

 

The 2026 Capability Bar

From an engineering standpoint, the standard for “intelligence” stops being an abstract goal and starts looking like a set of capabilities. 

Unified reporting across systems

Reporting has to be generated from one set of stable definitions, so teams aren’t rebuilding the same narrative across tools and sites every week.

Farm-conditional models

Models need to predict in context, shaped by the farm’s own site conditions, history, operational constraints, and decision patterns, not generic assumptions.
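
As a toy illustration only (the numbers and rules are invented, and in practice the adjustments would be learned from the farm’s own history rather than hand-written), “farm-conditional” means the prediction changes with the site’s context:

    GENERIC_TREATMENT_EFFECT = 0.40    # a notional industry-average lice reduction

    def farm_conditional_effect(site_context: dict) -> float:
        """Adjust the generic expectation using the farm's own conditions and history.
        A real system would learn these adjustments; the hand-coded rules are stand-ins."""
        effect = GENERIC_TREATMENT_EFFECT
        if site_context["sea_temp_c"] < 6:
            effect *= 0.7              # colder water, weaker response at this site historically
        if site_context["treatments_this_generation"] >= 2:
            effect *= 0.85             # diminishing returns after repeated treatments
        return round(effect, 2)

    print(farm_conditional_effect({"sea_temp_c": 5.2, "treatments_this_generation": 2}))   # 0.24
    print(farm_conditional_effect({"sea_temp_c": 9.0, "treatments_this_generation": 0}))   # 0.4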

In-season “why”

Teams need the ability to connect actions to outcomes within the generation, so coordination and course-correction happen while the window is still open, not after the fact.
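
In data terms, that means being able to ask “what moved after we acted?” while the generation is still in the water. A simplified sketch, with invented numbers and deliberately ignoring the confounders (temperature, handling, neighboring sites) a real attribution would need:

    from datetime import date, timedelta

    interventions = [
        {"kind": "freshwater_treatment", "on": date(2026, 2, 3)},
        {"kind": "feed_change",          "on": date(2026, 2, 17)},
    ]
    weekly_lice = {                      # adult female lice per fish, by week start
        date(2026, 1, 27): 0.52,
        date(2026, 2, 3):  0.55,
        date(2026, 2, 10): 0.31,
        date(2026, 2, 17): 0.30,
        date(2026, 2, 24): 0.29,
    }

    def outcome_delta_after(action_date: date, horizon_days: int = 14) -> float:
        """Compare the outcome just before an action with the latest reading inside
        a follow-up window, so 'did it work?' gets asked in-season, not at harvest."""
        before = max(d for d in weekly_lice if d <= action_date)
        after = max(d for d in weekly_lice if d <= action_date + timedelta(days=horizon_days))
        return round(weekly_lice[after] - weekly_lice[before], 2)

    for act in interventions:
        print(act["kind"], outcome_delta_after(act["on"]))
    # freshwater_treatment -0.25   (counts fell after the treatment)
    # feed_change -0.01            (no clear movement within the window)

A simple before-and-after comparison like this is not causal proof, which is exactly why the environmental and handling context has to be linked to the same timeline.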

This is where many farms discover the real cost of a DIY approach. You can assemble pieces and build toward these capabilities, but maintaining them as vendors change and operations evolve becomes its own product.

 

The Intelligence Test for Technologists

If you want a clean way to evaluate whether your stack is capable of intelligence, don’t start with “does it use AI?” Start with this: Can it pull answers forward without time-consuming manual work? In practice, that comes down to specific questions such as:

  • Which treatments actually worked under similar conditions, and how do we know?

  • Where is risk increasing before it becomes visible in welfare outcomes?

  • What changed this week that explains the performance shift, and what’s the next best action?

  • Can we produce an audit-ready explanation in hours, not days?

If your current stack can’t answer those questions without rebuilding the story each time, that’s the signal. The system doesn’t have a stable way to connect populations, events, and outcomes across tools. Until it does, every “why” question becomes a custom investigation, and intelligence never becomes repeatable.

 

What We're Building Toward at Manolin

At Manolin, our focus in 2026 is not adding another point tool into an already crowded stack. It’s building the intelligence layer farms need so the stack stops behaving like disconnected systems and starts behaving like one learning system.

We’ve built this direction on scale, not theory: 9.1 billion data points modeled, 64+ million fish traced, and 30+ years of global aquaculture data. Those numbers matter for one reason: operational intelligence only works when linkage holds under real-world complexity.

We’re grateful for partners who push us with real constraints and real feedback. The industry deserves technology partners who build at the pace of the industry. We’ll keep shipping throughout the year in close collaboration with the teams doing the work, turning mess and complexity into clearer understanding and learning.

Because in salmon farming, the deadline isn’t the next sprint. It’s the treatment window, the oxygen dip, the handling event, the week when the trend shifts. Those timelines don’t wait for manual rebuilds or perfect dashboards.