
Leadership Insights
Gus Rivera, Seagull Software
04.27.2026

I spent the better part of two years not building AI. I was cleaning data.
At the time, we were scaling our Track & Trace platform, and I assumed the hardest work would sit in the AI layer: models, algorithms, optimization. It didn't. Roughly 80% of the effort went into something far less glamorous: fixing the data foundation. Cleansing errors. Resolving inconsistencies. Capturing operational signals that weren't being recorded at all.
It wasn’t exciting work. But it was necessary. Because before you can build intelligence, you have to make the data worthy of it.
That lesson is showing up everywhere now. Across the industry, there's a growing realization that most AI initiatives don't fail because the models are inadequate; they fail because the data underneath them isn't reliable. Recent research puts a number on the problem: up to 95% of GenAI initiatives in the supply chain failed to deliver sustained ROI. Not because of the models. Because of fragmented data, siloed systems, and manual workflows.
AI didn’t fail. Data governance did.
What the models can’t fix
There’s a tendency to believe that better models will compensate for imperfect data. In practice, the opposite happens. AI systems are very good at generating answers that sound confident, even when they’re based on incomplete or inaccurate inputs.
In a supply chain context, that's not just a technical issue; it's an operational risk.
Data doesn't flow cleanly across most environments. It sits in isolated systems: a warehouse management system here, a labeling platform there, ERP updates that lag behind real-world events. When you layer AI on top of that, you're not creating clarity; you're scaling ambiguity.
And at scale, a wrong answer is often more damaging than no answer at all. The organizations seeing real results are the ones that flipped the sequence. They didn’t start with AI. They started by fixing how data is created, captured, and connected.
Where the real data lives
One of the most overlooked sources of high-quality data in the supply chain is also one of the most ubiquitous: the label. Every time a label is printed or scanned, it captures something meaningful. Identity. Location. Movement. Status. These aren't abstract data points; they're operational facts, recorded in real time as products move through manufacturing, distribution, and fulfillment.
Over time, those events form a continuous, item-level history. That's what a true digital twin looks like: not a static record or a supplier-provided declaration, but a living timeline of what happened at each step. It's built incrementally, through thousands of small, consistent data capture moments.
You don’t assemble that overnight. It becomes a durable advantage over time, a data asset that competitors can’t easily replicate.
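To make that concrete, here's a rough sketch of what an item-level event history can look like in code. Everything in it (the ScanEvent record, its fields, the status vocabulary) is an illustration invented for this post, not the actual Track & Trace schema.

    from dataclasses import dataclass
    from datetime import datetime

    # One label print or scan, recorded as an immutable operational fact.
    # The record shape and field names are illustrative, not a real schema.
    @dataclass(frozen=True)
    class ScanEvent:
        item_id: str         # identity: what was printed or scanned
        location: str        # where the event happened
        status: str          # e.g. "printed", "received", "picked", "shipped"
        timestamp: datetime  # when it happened

    def item_timeline(events: list[ScanEvent], item_id: str) -> list[ScanEvent]:
        """The digital twin of one item: its scan events in time order."""
        return sorted(
            (e for e in events if e.item_id == item_id),
            key=lambda e: e.timestamp,
        )

    events = [
        ScanEvent("SKU-42-0001", "Plant A, line 3", "printed", datetime(2026, 4, 1, 8, 15)),
        ScanEvent("SKU-42-0001", "DC West, dock 7", "received", datetime(2026, 4, 3, 14, 2)),
        ScanEvent("SKU-42-0001", "DC West, zone B", "picked", datetime(2026, 4, 5, 9, 40)),
    ]

    for e in item_timeline(events, "SKU-42-0001"):
        print(f"{e.timestamp:%Y-%m-%d %H:%M}  {e.status:<9} {e.location}")

Nothing about that snippet is sophisticated, and that's the point: the value is in the accumulation of these records, not in any one of them.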
At the same time, external pressure is increasing. Regulatory requirements like GS1 Sunrise 2027, FDA traceability rules, and digital product passport initiatives in the EU are forcing organizations to take data integrity more seriously. The timeline isn’t theoretical anymore.
But compliance is just the forcing function. The real value comes from what that data enables once it’s trustworthy.
From clean data to real outcomes
When the data foundation is solid, the use cases become tangible. You can detect anomalies early, before disruptions ripple across the network. You can identify supplier performance patterns that wouldn’t surface in a static report. You can move from reactive inventory management to predictive positioning.
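The first of those use cases is simple enough to sketch. Continuing the hypothetical ScanEvent example above, the check below flags items that have gone quiet between scans. The 48-hour threshold is an arbitrary number chosen for illustration; in practice it would come from historical dwell times per location or lane.

    from datetime import timedelta

    # Flag any item whose gap between consecutive scan events exceeds an
    # expected dwell time. Deliberately simple; the threshold is assumed.
    def stalled_items(events, max_gap=timedelta(hours=48)):
        by_item = {}
        for e in sorted(events, key=lambda e: e.timestamp):
            by_item.setdefault(e.item_id, []).append(e)

        flagged = []
        for item_id, timeline in by_item.items():
            for prev, curr in zip(timeline, timeline[1:]):
                if curr.timestamp - prev.timestamp > max_gap:
                    flagged.append((item_id, prev.location, curr.timestamp - prev.timestamp))
        return flagged

    for item_id, last_seen, gap in stalled_items(events):
        print(f"{item_id}: no scan for {gap} after {last_seen}")

A check like this only works if the underlying events are complete and trustworthy, which is exactly the foundation work described above.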
Even small improvements matter. Stockouts alone account for an estimated 4–7% of annual revenue loss in many organizations. That’s not a modeling problem. It’s a visibility problem rooted in data quality. And this is where the conversation is heading next.
Agentic AI, systems made up of multiple coordinated agents operating across edge and cloud environments, is quickly becoming the next frontier. But these systems are even more dependent on clean, consistent inputs. They don't just analyze data; they act on it. Which raises the stakes.
One question worth asking
Before companies make an AI investment, it’s worth pausing on a simple question: If you fed your current operational data into a model today, would you trust the output? If the answer is no, that’s not a failure. It’s a starting point.
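You don't need a model to take a first pass at that question. A few mechanical checks on the event stream already tell you a lot. The ones below (again reusing the hypothetical ScanEvent records from earlier) are examples, not a data-quality framework.

    from collections import Counter

    # Three mechanical trust checks on an event stream. The specific checks
    # and the status vocabulary are illustrative assumptions.
    def quality_report(events):
        known_statuses = {"printed", "received", "picked", "shipped"}
        total = len(events)

        missing = sum(1 for e in events if not (e.item_id and e.location and e.status))
        unknown = sum(1 for e in events if e.status not in known_statuses)

        # Exact duplicates usually mean the same scan was recorded twice.
        dupes = sum(
            n - 1
            for n in Counter((e.item_id, e.status, e.timestamp) for e in events).values()
            if n > 1
        )

        print(f"events: {total}")
        print(f"missing fields:    {missing} ({missing / total:.1%})")
        print(f"unknown statuses:  {unknown} ({unknown / total:.1%})")
        print(f"duplicate scans:   {dupes}")

    quality_report(events)

If numbers like these surprise you, you've found your starting point.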
The companies that are getting real value from AI didn't wait for better models. They did the harder work earlier, building a data foundation that could support what comes next. Everything else builds from there.
—
Gus Rivera is the Chief Technology Officer of Seagull Software, with more than 20 years of experience building companies, teams, and innovative cloud-native software products. He is the chief architect of our Track & Trace item & inventory tracking platform. Gus also leads Seagull's software engineering, cloud operations, and support teams.
