2026 is the first year AI conversations in salmon farming feel less like a trend and more like a serious operational question. Not because farms suddenly want more technology, but because the cost of being late is clearer than ever: a treatment window you miss, a welfare hit that escalates, a bad week that doesn’t stay contained.
Once you start evaluating “AI systems,” you’ll notice how quickly the vocabulary converges. “Insights.” “Better decisions.” “Automation.” Those aren’t wrong goals, they’re just broad. Most of us in the industry have used the same language at some point because it’s the simplest way to describe what everyone is aiming for.
The challenge is that farms don’t need more words in 2026. They need a way to separate visibility from learning: tools that help you see what’s happening versus systems that help you repeat what works across sites and seasons. After a few demos, the practical question becomes the one that matters:
What do we get on-farm that actually changes outcomes, and keeps improving over time?
That question lands differently depending on your role. On site, it’s the weight of more signals before you can act. In fish health, it’s the gap between seeing a signal and understanding what changed in context. And at the executive level, it’s variance: the same strategy on paper, different outcomes in practice, and no fast, trusted way to explain why.
This is what’s driving the AI discussion now. Not hype. A need for decision speed and learning that carries.
AI is a tool. Intelligence is a capability.
Many farms already have tools with algorithms inside them: cameras, sensors, feeding systems, reporting, forecasting experiments. But adding tools doesn’t automatically create a consistent way to learn and act across sites.
In 2026, a lot of farms are asking for models: lice risk, mortality risk, growth, harvest timing, treatment effectiveness. That’s rational. Models can be powerful.
But models only become operational when they sit inside a system that can do three things reliably: connect what’s happening now to what has happened before, support a confident decision under real constraints, and carry the result forward so the next decision is easier.
That’s what we mean by intelligence. Not “AI” as a feature, but a connective system that unifies what you already use and turns it into repeatable learning.
You feel the difference in the questions your organization can answer without a scramble: What changed before this mortality spike? Why did the same treatment perform differently at two sites? Which conditions and actions preceded our best outcomes?
When that exists, models stop living as experiments. They become part of how the farm operates. It also makes the bottleneck obvious: the hard part usually isn’t buying tools, it’s connecting them into learning.
A treatment works well at one site and disappoints at another. On paper, the plan looks the same. The team did what they were supposed to do. Then post-treatment mortality rises and everyone starts hunting for context across systems: environmental history, lice counts, handling events, feeding changes, fish group history, operational notes.
It’s not that the data doesn’t exist. It’s that it isn’t connected in a way the farm can learn from quickly.
That’s the hidden blocker in many “AI initiatives” in salmon farming. Not models. Not sensors. Not dashboards.
Linking.
Most farms have a technology stack that can record what happened and show what’s happening. But when the same story is split across systems, and sometimes split across definitions, the organization can’t reliably turn outcomes into learning that carries forward.
This is where a lot of the Power BI dependence comes from. Teams end up rebuilding the same narrative repeatedly: pulling from multiple platforms, reconciling definitions, recreating graphs, pasting into documents, and repeating under deadline. Over time, that becomes a process farms never meant to build: reconciliation.
This is also why AI can feel underwhelming. Without correct links between fish populations, events, conditions, and outcomes, even good models don’t become operational. They stay informational.
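As a concrete illustration of what “linking” means in practice, here is a minimal sketch, using pandas with entirely hypothetical data and column names, of joining a treatment event to the conditions around it and the mortality that followed. The point is not the code but the join keys: once site and date line up across systems, an outcome becomes analyzable.

```python
import pandas as pd

# Hypothetical, simplified tables. On a real farm these records typically
# live in separate systems (treatment logs, sensors, mortality reports).
treatments = pd.DataFrame({
    "site": ["A", "B"],
    "treated": pd.to_datetime(["2026-03-01", "2026-03-01"]),
})
environment = pd.DataFrame({
    "site": ["A", "B"],
    "date": pd.to_datetime(["2026-03-01", "2026-03-01"]),
    "temp_c": [8.5, 11.2],
})
mortality = pd.DataFrame({
    "site": ["A", "A", "B", "B"],
    "date": pd.to_datetime(["2026-03-05", "2026-03-12",
                            "2026-03-05", "2026-03-12"]),
    "daily_mortality_pct": [0.02, 0.02, 0.05, 0.09],
})

# Link the treatment to its context: conditions on the day of treatment.
linked = treatments.merge(environment, left_on=["site", "treated"],
                          right_on=["site", "date"])

# Link the treatment to its outcome: mean daily mortality over the 14 days
# after treatment.
post = mortality.merge(treatments, on="site")
post = post[(post["date"] > post["treated"]) &
            (post["date"] <= post["treated"] + pd.Timedelta(days=14))]
outcome = post.groupby("site", as_index=False)["daily_mortality_pct"].mean()

# One row per treatment: event, context, and outcome side by side.
result = linked.merge(outcome, on="site")
print(result[["site", "temp_c", "daily_mortality_pct"]])
```

Done across a whole season, the same join is what turns “post-treatment mortality rose” into something a team can actually learn from: which conditions and handling patterns the worse outcomes had in common.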
Once you see that clearly, vendor comparisons get easier. The question stops being “which tool has the best AI?” and becomes:
What will actually connect our stack into learning we can run the farm on?
There’s a point where adding another tool doesn’t make a farm more capable. It just adds another place for the truth to split.
That point shows up as complexity: multiple sites, multiple teams, multiple vendor systems, and multiple definitions of the same metric depending on who you ask. The stack grows, but the organization doesn’t get faster. Decisions slow down because time shifts from farming to reconciliation.
You can feel it when a regional strategy exists on paper, but execution varies by site. When the best-performing sites can’t be replicated reliably. When audit readiness still means scrambling. When forecasting is discussed, but never becomes operational.
At that stage, the limiting factor isn’t data access. It’s decision velocity and organizational learning.
Farms don’t need another dashboard. They need a way to make learning portable across teams, sites, and seasons without building a large internal analytics organization just to keep everything stitched together.
Once you cross that threshold, intelligence stops being a nice-to-have. It becomes the difference between running the farm on experience that compounds, or running it on experience that resets every season.
Biology moves faster than organizations. That’s always been true in salmon farming. What’s changed is how expensive that gap has become.
Regulation tightens faster than workflows evolve. Costs rise faster than process improvements pay back. When the organization is slow to turn signals into decisions, you don’t just lose time, you lose the window.
In 2026, farms that outperform won’t necessarily be the farms with the most tools. They’ll be the farms with the fastest learning system.
Teams that can connect what’s happening now to what’s happened before, make a call with confidence under constraints, and carry the result forward so the next decision is easier. Not because they guessed right, but because the system gets smarter with use.
That’s what intelligence looks like on a farm: decision velocity that compounds.
So where is this conversation going in 2026? Away from isolated outputs and toward operational capability. The bar is rising. Farms are increasingly expecting systems to do more than analyze. They’re expecting them to make the organization faster, more consistent, and less dependent on manual reporting.
Reporting has to stop looking like report-building. The workflow most teams live with today (pulling data from multiple systems, rebuilding the same graphs, pasting into documents, repeating under deadline) isn’t sustainable if audit pressure and internal performance demands keep increasing. Farms are starting to expect reporting that is unified, consistent across sites, and generated from a single set of definitions.
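One way to picture “a single set of definitions”: the metric is defined once, and every site report calls that one definition. A hypothetical sketch, with invented names and numbers:

```python
# Hypothetical sketch: one shared definition of a metric, reused by every
# site report, instead of each team rebuilding it in its own spreadsheet.

def post_treatment_mortality_pct(daily_pct, window_days=14):
    """Mean daily mortality (%) over the first window_days after a treatment."""
    window = daily_pct[:window_days]
    return sum(window) / len(window)

# Two sites report the same metric from the same definition, so the
# numbers are comparable by construction.
site_a = [0.02, 0.02, 0.03]   # daily mortality %, days 1-3 post-treatment
site_b = [0.05, 0.09, 0.07]

print(post_treatment_mortality_pct(site_a))
print(post_treatment_mortality_pct(site_b))
```

The design point is small but load-bearing: when the definition lives in one place, “consistent across sites” stops depending on every analyst remembering the same spreadsheet formula.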
Models will stop being generic. Farms will still ask for forecasting, but the expectation is shifting from “a model” to a model that reflects the farm’s reality: site conditions, history, operational constraints, decision patterns. In other words, models that predict in context.
Teams are also starting to demand answers to “why,” not just “what.” Not as a post-season analysis project, but as something they can use while the season is still unfolding. The ability to connect outcomes back to the conditions and actions that preceded them is what turns a one-off decision into a playbook.
This is the quiet shift: data stops being something you look at and becomes something you learn from.
If you want a clean way to evaluate systems this year, don’t start by asking whether they “use AI.” Start by asking whether they reduce uncertainty and make decisions easier to execute, explain, and repeat.
The most useful systems will help your team answer questions like: Why did this treatment underperform at this site? What did our best-performing sites actually do differently? Can we explain this outcome, with its context, without a week of report-building?
If your tech stack can’t answer those questions without manual stitching, the gap usually isn’t effort.
It’s linkage.
Farms don’t need more data. They need less uncertainty, and a system that turns cross-system reality into learning the organization can run on.