Injecting Domain Expertise Into Your AI System

When starting their AI initiatives, many companies are trapped in silos and treat AI as a purely technical enterprise, sidelining domain experts or involving them too late. They end up with generic AI applications that miss industry nuances, produce poor recommendations, and quickly become unpopular with users. By contrast, AI systems that deeply understand industry-specific processes, constraints, and decision logic have the following benefits:

  • Increased efficiency — The more domain knowledge AI incorporates, the less manual effort is required from human experts.
  • Improved adoption — Experts disengage from AI systems that feel too generic. AI must speak their language and align with real workflows to gain trust.
  • Sustainable competitive moat — As AI becomes a commodity, embedding proprietary expertise is the most effective way to build defensible AI systems (cf. this article to learn about the building blocks of AI’s competitive advantage).

Domain experts can help you connect the dots between the technicalities of an AI system and its real-life usage and value. Thus, they should be key stakeholders and co-creators of your AI applications. This guide is the first part of my series on expertise-driven AI. Following my mental model of AI systems, it provides a structured approach to embedding deep domain expertise into your AI.

Figure 1. Overview of the methods for domain knowledge integration

Throughout the article, we will use the example of supply chain optimisation (SCO) to illustrate these different methods. Modern supply chains are under constant strain from geopolitical tensions, climate disruptions, and volatile demand shifts, and AI can provide the kind of dynamic, high-coverage intelligence needed to anticipate delays, manage risks, and optimise logistics. However, without domain expertise, these systems are often disconnected from day-to-day operational reality. Let’s see how we can solve this by integrating domain expertise across the different components of the AI application.

1. Data: The bedrock of expertise-driven AI

AI is only as domain-aware as the data it learns from. Raw data isn’t enough — it must be curated, refined, and contextualised by experts who understand its meaning in the real world.

Data understanding: Teaching AI what matters

While data scientists can build sophisticated models to analyse patterns and distributions, these analyses often stay at a theoretical, abstract level. Only domain experts can validate whether the data is complete, accurate, and representative of real-world conditions.

In supply chain optimisation, for example, shipment records may contain missing delivery timestamps, inconsistent route details, or unexplained fluctuations in transit times. A data scientist might discard these as noise, but a logistics expert could have real-world explanations of these inconsistencies. For instance, they might be caused by weather-related delays, seasonal port congestion, or supplier reliability issues. If these nuances aren’t accounted for, the AI might learn an overly simplified view of supply chain dynamics, resulting in misleading risk assessments and poor recommendations.

Experts also play a critical role in assessing the completeness of data. AI models work with what they have, assuming that all key factors are already present. It takes human expertise and judgment to identify blind spots. For example, if your supply chain AI isn’t trained on customs clearance times or factory shutdown histories, it won’t be able to predict disruptions caused by regulatory issues or production bottlenecks.

✅ Implementation tip: Run joint Exploratory Data Analysis (EDA) sessions with data scientists and domain experts to identify missing business-critical information, ensuring AI models work with a complete and meaningful dataset, not just statistically clean data.
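As a starting point for such a joint session, a data scientist might pre-flag the records that need an expert's judgment. The following is a minimal sketch; the field names (`shipment_id`, `delivered_at`, `transit_days`) are hypothetical placeholders for your own schema:

```python
from statistics import mean, stdev

def flag_for_expert_review(shipments, z_threshold=3.0):
    """Pre-flag shipment records that warrant a domain expert's judgment:
    missing delivery timestamps or statistically unusual transit times."""
    times = [s["transit_days"] for s in shipments if s.get("transit_days") is not None]
    mu = mean(times)
    sigma = stdev(times) if len(times) > 1 else 0.0
    flagged = []
    for s in shipments:
        if s.get("delivered_at") is None:
            flagged.append((s["shipment_id"], "missing delivery timestamp"))
        elif (s.get("transit_days") is not None and sigma > 0
              and abs(s["transit_days"] - mu) / sigma > z_threshold):
            flagged.append((s["shipment_id"], "unusual transit time"))
    return flagged
```

The statistical flags are only conversation starters: whether an outlier is noise, a weather delay, or seasonal port congestion is exactly the call the logistics expert makes.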

Data source selection: Start small, expand strategically

One common pitfall when starting with AI is integrating too much data too soon, leading to complexity, congestion of your data pipelines, and blurred or noisy insights. Instead, start with a couple of high-impact data sources and expand incrementally based on AI performance and user needs. For instance, an SCO system may initially use historical shipment data and supplier reliability scores. Over time, domain experts may identify missing information — such as port congestion data or real-time weather forecasts — and point engineers to those data sources where it can be found.

✅ Implementation tip: Start with a minimal, high-value dataset (normally 3–5 data sources), then expand incrementally based on expert feedback and real-world AI performance.

Data annotation

AI models learn by detecting patterns in data, but sometimes, the right learning signals aren’t yet present in raw data. This is where data annotation comes in — by labelling key attributes, domain experts help the AI understand what matters and make better predictions. Consider an AI model built to predict supplier reliability. The model is trained on shipment records, which contain delivery times, delays, and transit routes. However, raw delivery data alone doesn’t capture the full picture of supplier risk — there are no direct labels indicating whether a supplier is “high risk” or “low risk.”

Without more explicit learning signals, the AI might make the wrong conclusions. It could conclude that all delays are equally bad, even when some are caused by predictable seasonal fluctuations. Or it might overlook early warning signs of supplier instability, such as frequent last-minute order changes or inconsistent inventory levels.

Domain experts can enrich the data with more nuanced labels, such as supplier risk categories, disruption causes, and exception-handling rules. By introducing these curated learning signals, you can ensure that AI doesn’t just memorise past trends but learns meaningful, decision-ready insights.

You shouldn’t rush your annotation efforts — instead, think about a structured annotation process that includes the following components:

  • Annotation guidelines: Establish clear, standardized rules for labeling data to ensure consistency. For example, supplier risk categories should be based on defined thresholds (e.g., delivery delays over 5 days + financial instability = high risk).
  • Multiple expert review: Involve several domain experts to reduce bias and ensure objectivity, particularly for subjective classifications like risk levels or disruption impact.
  • Granular labelling: Capture both direct and contextual factors, such as annotating not just shipment delays but also the cause (customs, weather, supplier fault).
  • Continuous refinement: Regularly audit and refine annotations based on AI performance — if predictions consistently miss key risks, experts should adjust labelling strategies accordingly.

✅ Implementation tip: Define an annotation playbook with clear labelling criteria, involve at least two domain experts per critical label for objectivity, and run regular annotation review cycles to ensure AI is learning from accurate, business-relevant insights.
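As an illustration, the threshold rule from the guidelines above can be encoded so every annotator applies it identically, with a small helper for the multiple-expert review step. The thresholds and category names are the hypothetical ones from the example, not a recommendation:

```python
def label_supplier_risk(avg_delay_days, financially_unstable):
    """Annotation guideline as code: delivery delays over 5 days combined
    with financial instability mean high risk; either alone means medium."""
    if avg_delay_days > 5 and financially_unstable:
        return "high"
    if avg_delay_days > 5 or financially_unstable:
        return "medium"
    return "low"

def needs_adjudication(expert_labels):
    """Multiple-expert review: escalate whenever annotators disagree."""
    return len(set(expert_labels)) > 1
```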

Synthetic data: Preparing AI for rare but critical events

So far, our AI models learn from real-life historical data. However, rare, high-impact events — like factory shutdowns, port closures, or regulatory shifts in our supply chain scenario — may be underrepresented. Without exposure to these scenarios, AI can fail to anticipate major risks, leading to overconfidence in supplier stability and poor contingency planning. Synthetic data solves this by creating more datapoints for rare events, but expert oversight is crucial to ensure that it reflects plausible risks rather than unrealistic patterns.

Let’s say we want to predict supplier reliability in our supply chain system. The historical data may have few recorded supplier failures — but that’s not because failures don’t happen. Rather, many companies proactively mitigate risks before they escalate. Without synthetic examples, AI might deduce that supplier defaults are extremely rare, leading to misguided risk assessments.

Experts can help generate synthetic failure scenarios based on:

  • Historical patterns — Simulating supplier collapses triggered by economic downturns, regulatory shifts, or geopolitical tensions.
  • Hidden risk indicators — Training AI on unrecorded early warning signs, like financial instability or leadership changes.
  • Counterfactuals — Creating “what-if” events, such as a semiconductor supplier suddenly halting production or a prolonged port strike.

✅ Actionable step: Work with domain experts to define high-impact but low-frequency events and scenarios, which can be in focus when you generate synthetic data.
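One simple way to operationalise this is to let experts author scenario templates and sample synthetic records from them. In this sketch, the triggers, delay ranges, and probabilities are made-up placeholders your experts would replace:

```python
import random

# Expert-defined scenario templates for rare supplier failures (illustrative values).
SCENARIO_TEMPLATES = [
    {"trigger": "economic downturn", "delay_days": (20, 60), "failure_prob": 0.4},
    {"trigger": "port strike", "delay_days": (10, 30), "failure_prob": 0.2},
    {"trigger": "regulatory shift", "delay_days": (5, 45), "failure_prob": 0.3},
]

def generate_synthetic_failures(n, seed=42):
    """Sample n synthetic disruption records from the expert templates."""
    rng = random.Random(seed)
    records = []
    for _ in range(n):
        t = rng.choice(SCENARIO_TEMPLATES)
        lo, hi = t["delay_days"]
        records.append({
            "trigger": t["trigger"],
            "delay_days": rng.randint(lo, hi),
            "supplier_failed": rng.random() < t["failure_prob"],
            "synthetic": True,  # always mark synthetic rows so they can be audited
        })
    return records
```

Marking every generated row as synthetic keeps the expert oversight loop honest: experts can audit, reweight, or discard scenarios that turn out to be implausible.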

Data makes domain expertise shine. An AI initiative that relies on clean, relevant, and enriched domain data will have an obvious competitive advantage over one that takes the “quick-and-dirty” shortcut to data. However, keep in mind that working with data can be tedious, and experts need to see the outcome of their efforts — whether it’s improving AI-driven risk assessments, optimising supply chain resilience, or enabling smarter decision-making. The key is to make data collaboration intuitive, purpose-driven, and directly tied to business outcomes, so experts remain engaged and motivated.

Intelligence: Making AI systems smarter

Once AI has access to high-quality data, the next challenge is ensuring it generates useful and accurate outputs. Domain expertise is needed to:

  1. Define clear AI objectives aligned with business priorities
  2. Ensure AI correctly interprets industry-specific data
  3. Continuously validate AI’s outputs and recommendations

Let’s look at some common AI approaches and see how they can benefit from an extra shot of domain knowledge.

Training predictive models from scratch

For structured problems like supply chain forecasting, predictive models such as classification and regression can help anticipate delays and suggest optimisations. However, to make sure these models are aligned with business goals, data scientists and knowledge engineers need to work together. For example, an AI model might try to minimise shipment delays at all costs, but a supply chain expert knows that fast-tracking every shipment through air freight is financially unsustainable. They can formulate additional constraints on the model, making it prioritise critical shipments while balancing cost, risk, and lead times.

✅ Implementation tip: Define clear objectives and constraints with domain experts before training AI models, ensuring alignment with real business priorities.
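To make this concrete, here is a sketch of an expert-shaped objective: a hard constraint ("no air freight for non-critical shipments") plus expert-chosen weights over cost, risk, and delay. All numbers are illustrative assumptions:

```python
def score_route(option, shipment_critical, weights=(0.5, 0.3, 0.2)):
    """Constraint-aware objective: lower is better. The weights over
    (cost, risk, delay) come from domain experts, not from the model."""
    w_cost, w_risk, w_delay = weights
    if option["mode"] == "air" and not shipment_critical:
        return float("inf")  # hard constraint: no air freight for non-critical loads
    return w_cost * option["cost"] + w_risk * option["risk"] + w_delay * option["delay_days"]

def pick_route(options, shipment_critical):
    """Choose the route that best balances cost, risk, and lead time."""
    return min(options, key=lambda o: score_route(o, shipment_critical))
```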

For a detailed overview of predictive AI techniques, please refer to Chapter 4 of my book The Art of AI Product Management.

Navigating the LLM triad

While predictive models trained from scratch can excel at very specific tasks, they are also rigid and will “refuse” to perform any other task. GenAI models are more open-minded and can be used for highly diverse requests. For example, an LLM-based conversational widget in an SCO system can allow users to interact with real-time insights using natural language. Instead of sifting through inflexible dashboards, users can ask, “Which suppliers are at risk of delays?” or “What alternative routes are available?” The AI pulls from historical data, live logistics feeds, and external risk factors to provide actionable answers, suggest mitigations, and even automate workflows like rerouting shipments.

But how can you ensure that a huge, out-of-the-box model like ChatGPT or Llama understands the nuances of your domain? Let’s walk through the LLM triad — a progression of techniques to incorporate domain knowledge into your LLM system.

domain expertise AI
Figure 2: The LLM triad is a progression of techniques for incorporating domain- and company-specific knowledge into your LLM system

As you progress from left to right, you can ingrain more domain knowledge into the LLM — however, each stage also adds new technical challenges (if you are interested in a systematic deep-dive into the LLM triad, please check out chapters 5–8 of my book The Art of AI Product Management). Here, let’s focus on how domain experts can jump in at each of the stages:

  1. Prompting out-of-the-box LLMs might seem like a generic approach, but with the right intuition and skill, domain experts can fine-tune prompts to extract the extra bit of domain knowledge out of the LLM. Personally, I think this is a big part of the fascination around prompting — it puts the most powerful AI models directly into the hands of domain experts without any technical expertise. Some key prompting techniques include:
  • Few-shot prompting: Incorporate examples to guide the model’s responses. Instead of just asking “What are alternative shipping routes?”, a well-crafted prompt includes sample scenarios, such as “Example of past scenario: A previous delay at the Port of Shenzhen was mitigated by rerouting through Ho Chi Minh City, reducing transit time by 3 days.”
  • Chain-of-thought prompting: Encourage step-by-step reasoning for complex logistics queries. Instead of “Why is my shipment delayed?”, a structured prompt might be “Analyse historical delivery data, weather reports, and customs processing times to determine why shipment #12345 is delayed.”
  • Providing further background information: Attach external documents to improve domain-specific responses. For example, prompts could reference real-time port congestion reports, supplier contracts, or risk assessments to generate data-backed recommendations. Most LLM interfaces already allow you to conveniently attach additional files to your prompt.
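The three techniques compose naturally into a single prompt template. A minimal sketch (the helper and its structure are my own illustration, not a standard API):

```python
def build_prompt(question, examples=(), context_docs=()):
    """Assemble a prompt combining few-shot examples, attached background
    documents, and a chain-of-thought instruction."""
    lines = ["You are a supply chain logistics assistant."]
    for example in examples:          # few-shot prompting
        lines.append(f"Example of past scenario: {example}")
    for doc in context_docs:          # further background information
        lines.append(f"Background document: {doc}")
    # chain-of-thought prompting
    lines.append("Analyse the relevant data step by step before answering.")
    lines.append(f"Question: {question}")
    return "\n".join(lines)
```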

2. RAG (Retrieval-Augmented Generation): While prompting helps guide AI, it still relies on pre-trained knowledge, which may be outdated or incomplete. RAG allows AI to retrieve real-time, company-specific data, ensuring that its responses are grounded in current logistics reports, supplier performance records, and risk assessments. For example, instead of generating generic supplier risk analyses, a RAG-powered AI system would pull real-time shipment data, supplier credit ratings, and port congestion reports before making recommendations. Domain experts can help select and structure these data sources and are also needed when it comes to testing and evaluating RAG systems.

✅ Implementation tip: Work with domain experts to curate and structure knowledge sources — ensuring AI retrieves and applies only the most relevant and high-quality business information.
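A stripped-down RAG skeleton looks like the following. The keyword-overlap retriever is a deliberately naive stand-in for a real vector store, and `llm` is any callable wrapping your model client:

```python
def retrieve(query, documents, k=2):
    """Toy keyword-overlap retriever standing in for a vector store."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def answer_with_rag(query, documents, llm):
    """RAG skeleton: ground the LLM call in retrieved, company-specific context."""
    context = "\n".join(retrieve(query, documents))
    return llm(f"Context:\n{context}\n\nQuestion: {query}")
```

The expert contribution lives in `documents`: which logistics reports, supplier records, and risk assessments are worth retrieving from in the first place.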

3. Fine-tuning: While prompting and RAG inject domain knowledge on the fly, they do not inherently embed domain-specific workflows, terminology, or decision logic into your LLM. Fine-tuning adapts the LLM to think like a logistics expert. Domain experts can guide this process by creating high-quality training data, ensuring AI learns from real supplier assessments, risk evaluations, and procurement decisions. They can refine industry terminology to prevent misinterpretations (e.g., AI distinguishing between “buffer stock” and “safety stock”). They also align AI’s reasoning with business logic, ensuring it considers cost, risk, and compliance — not just efficiency. Finally, they evaluate fine-tuned models, testing AI against real-world decisions to catch biases or blind spots.

✅ Implementation tip: In LLM fine-tuning, data is the crucial success factor. Quality goes over quantity, and fine-tuning on a small, high-quality dataset can give you excellent results. Thus, give your experts enough time to figure out the right structure and content of the fine-tuning data and plan for plenty of end-to-end iterations of your fine-tuning process.
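As a sketch, each expert-reviewed example can be shaped into a chat-style record before export. The exact JSONL schema varies by fine-tuning provider, so treat this as an illustrative format rather than a spec:

```python
import json

def to_finetune_record(question, expert_answer):
    """One expert-curated training example in chat format."""
    return {
        "messages": [
            {"role": "system", "content": "You are a supply chain risk analyst."},
            {"role": "user", "content": question},
            {"role": "assistant", "content": expert_answer},
        ]
    }

def write_jsonl(records, path):
    """Export the curated dataset as one JSON object per line."""
    with open(path, "w") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")
```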

Encoding expert knowledge with neuro-symbolic AI

Every machine learning algorithm gets it wrong from time to time. To mitigate errors, it helps to set the “hard facts” of your domain in stone, making your AI system more reliable and controllable. This combination of machine learning and deterministic rules is called neuro-symbolic AI.

For example, an explicit knowledge graph can encode supplier relationships, regulatory constraints, transportation networks, and risk dependencies in a structured, interconnected format.

Figure 3: Knowledge graphs explicitly encode relationships between entities, reducing the guesswork in your AI system

Instead of relying purely on statistical correlations, an AI system enriched with knowledge graphs can:

  • Validate predictions against domain-specific rules (e.g., ensuring that AI-generated supplier recommendations comply with regulatory requirements).
  • Infer missing information (e.g., if a supplier has no historical delays but shares dependencies with high-risk suppliers, AI can assess its potential risk).
  • Improve explainability by allowing AI decisions to be traced back to logical, rule-based reasoning rather than black-box statistical outputs.

How can you decide which knowledge should be encoded with rules (symbolic AI), and which should be learned dynamically from the data (neural AI)? Domain experts can help you pick those bits of knowledge where hard-coding makes the most sense:

  • Knowledge that is relatively stable over time
  • Knowledge that is hard to infer from the data, for example because it is not well-represented
  • Knowledge that is critical for high-impact decisions in your domain, so you can’t afford to get it wrong

In most cases, this knowledge will be stored in separate components of your AI system, like decision trees, knowledge graphs, and ontologies. There are also some methods to integrate it directly into LLMs and other statistical models, such as Lamini’s memory fine-tuning.
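A toy version of the "infer missing information" rule above, using a hypothetical adjacency-list graph of shared dependencies (supplier names and risk labels are invented for illustration):

```python
# Tiny knowledge graph: supplier -> set of dependencies it relies on.
GRAPH = {
    "supplier_a": {"port_x", "chip_fab_1"},
    "supplier_b": {"port_x"},
    "supplier_c": {"port_y"},
}
HIGH_RISK = {"supplier_a"}  # set by expert assessment, not learned from data

def inferred_risk(supplier):
    """Symbolic rule: a supplier sharing a dependency with a high-risk
    supplier inherits 'elevated' risk even with a clean delivery history."""
    if supplier in HIGH_RISK:
        return "high"
    deps = GRAPH.get(supplier, set())
    for risky in HIGH_RISK:
        if deps & GRAPH.get(risky, set()):
            return "elevated"
    return "baseline"
```

Because the rule is explicit, every "elevated" verdict can be traced back to the shared dependency that triggered it, which is exactly the explainability benefit described above.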

Compound AI and modular workflows

Generating insights and turning them into actions is a multi-step process. Experts can help you model workflows and decision-making pipelines, ensuring that the process followed by your AI system aligns with their tasks. For example, the following pipeline shows how the AI components we considered so far can be combined into a modular workflow for the mitigation of shipment risks:

Figure 4: A combined workflow for the assessment and mitigation of shipment risks

Experts are also needed to calibrate the “labor distribution” between humans and AI. For example, when modelling decision logic, they can set thresholds for automation, deciding when AI can trigger workflows versus when human approval is needed.

✅ Implementation tip: Involve your domain experts in mapping your processes to AI models and assets, identifying gaps vs. steps that can already be automated.
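Those expert-set automation thresholds can be as simple as a routing function; the numeric cut-offs below are placeholders that domain experts would calibrate:

```python
def route_decision(action, confidence, auto_threshold=0.9, review_threshold=0.6):
    """Decide whether the AI acts alone, asks for approval, or hands over.
    Thresholds are expert-calibrated, not model-chosen."""
    if confidence >= auto_threshold:
        return ("auto_execute", action)
    if confidence >= review_threshold:
        return ("human_approval", action)
    return ("human_decision", action)
```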

Designing ergonomic user experiences

Especially in B2B environments, where workers are deeply embedded in their daily workflows, the user experience must be seamlessly integrated with existing processes and task structures to ensure efficiency and adoption. For example, an AI-powered supply chain tool must align with how logistics professionals think, work, and make decisions. In the development phase, domain experts are the closest “peers” to your real users, and picking their brains is one of the fastest ways to bridge the gap between AI capabilities and real-world usability.

✅ Implementation tip: Involve domain experts early in UX design to ensure AI interfaces are intuitive, relevant, and tailored to real decision-making workflows.

Ensuring transparency and trust in AI decisions

AI thinks differently from humans, which makes us humans skeptical. Often, that’s a good thing since it helps us stay alert to potential mistakes. But distrust is also one of the biggest barriers to AI adoption. When users don’t understand why a system makes a particular recommendation, they are less likely to work with it. Domain experts can define how AI should explain itself — ensuring users have visibility into confidence scores, decision logic, and key influencing factors.

For example, if an SCO system recommends rerouting a shipment, it would be irresponsible on the part of a logistics planner to just accept it. She needs to see the “why” behind the recommendation — is it due to supplier risk, port congestion, or fuel cost spikes? The UX should show a breakdown of the decision, backed by additional information like historical data, risk factors, and a cost-benefit analysis.

⚠ Mitigate overreliance on AI: If your users depend on AI excessively, bias, errors, and unforeseen failures can go unchecked. Experts should help balance AI-driven insights with human expertise, ethical oversight, and strategic safeguards to ensure resilience, adaptability, and trust in decision-making.

✅ Implementation tip: Work with domain experts to define key explainability features — such as confidence scores, data sources, and impact summaries — so users can quickly assess AI-driven recommendations.
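A minimal sketch of such an impact summary: each influencing factor carries an expert-defined label and a contribution weight, with the biggest driver listed first. Factor names and weights are illustrative:

```python
def explain_recommendation(recommendation, factors):
    """Render a user-facing breakdown of an AI recommendation.
    `factors` maps an expert-defined label to its contribution weight."""
    ranked = sorted(factors.items(), key=lambda kv: kv[1], reverse=True)
    lines = [f"Recommendation: {recommendation}"]
    for name, weight in ranked:
        lines.append(f"- {name}: {weight:.0%} of the decision")
    return "\n".join(lines)
```

A logistics planner reading this breakdown can immediately see whether the reroute is driven by port congestion or by supplier risk, and challenge the recommendation accordingly.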

Simplifying AI interactions without losing depth

AI tools should make complex decisions easier, not harder. If users need deep technical knowledge to extract insights from AI, the system has failed from a UX perspective. Domain experts can help strike a balance between simplicity and depth, ensuring the interface provides actionable, context-aware recommendations while allowing deeper analysis when needed.

For instance, instead of forcing users to manually sift through data tables, AI could provide pre-configured reports based on common logistics challenges. However, expert users should also have on-demand access to raw data and advanced settings when necessary. The key is to design AI interactions that are efficient for everyday use but flexible for deep analysis when required.

✅ Implementation tip: Use domain expert feedback to define default views, priority alerts, and user-configurable settings, ensuring AI interfaces provide both efficiency for routine tasks and depth for deeper research and strategic decisions.

Continuous UX testing and iteration with experts

AI UX isn’t a one-and-done process — it needs to evolve with real-world user feedback. Domain experts play a key role in UX testing, refinement, and iteration, ensuring that AI-driven workflows stay aligned with business needs and user expectations.

For example, your initial interface may surface too many low-priority alerts, leading to alert fatigue where users start ignoring AI recommendations. Supply chain experts can identify which alerts are most valuable, allowing UX designers to prioritize high-impact insights while reducing noise.

✅ Implementation tip: Conduct think-aloud sessions and have domain experts verbalize their thought process when interacting with your AI interface. This helps AI teams uncover hidden assumptions and refine AI based on how experts actually think and make decisions.

Conclusion

Vertical AI systems must integrate domain knowledge at every stage, and experts should become key stakeholders in your AI development:

  • They refine data selection, annotation, and synthetic data.
  • They guide AI learning through prompting, RAG, and fine-tuning.
  • They support the design of seamless user experiences that integrate with daily workflows in a transparent and trustworthy way.

An AI system that “gets” the domain of your users will not only be useful and adopted in the short and medium term, but will also contribute to the competitive advantage of your business.

Now that you have learned a bunch of methods to incorporate domain-specific knowledge, you might be wondering how to approach this in your organizational context. Stay tuned for my next article, where we will consider the practical challenges and strategies for implementing an expertise-driven AI strategy!

Note: Unless noted otherwise, all images are the author’s.

This article was originally published on Towards Data Science and re-published to TOPBOTS with permission from the author.



Carving Out Your Competitive Advantage With AI

When I talk to corporate customers, there is often this idea that AI, while powerful, won’t give any company a lasting competitive edge. After all, over the past two years, large-scale LLMs have become a commodity for everyone. I’ve been thinking a lot about how companies can shape a competitive advantage using AI, and a recent article in the Harvard Business Review (AI Won’t Give You a New Sustainable Advantage) inspired me to organize my thoughts around the topic.

Indeed, maybe one day, when businesses and markets are driven by the invisible hand of AI, the equal-opportunity hypothesis might ring true. But until then, there are so many ways — big and small — for companies to differentiate themselves using AI. I like to think of it as a complex ingredient in your business recipe — the success of the final dish depends on the cook who is making it. The magic lies in how you combine AI craft with strategy, design, and execution.

In this article, I’ll focus on real-life business applications of AI and explore their key sources of competitive advantage. As we’ll see, successful AI integration goes far beyond technology, and certainly beyond having the trendiest LLM at work. It’s about finding AI’s unique sweet spot in your organization, making critical design decisions, and aligning a variety of stakeholders around the optimal design, deployment, and usage of your AI systems. In the following, I will illustrate this using the mental model we developed to structure our thinking about AI projects (cf. this article for an in-depth introduction).

Figure 1: Sources of competitive advantage in an AI system (cf. this article for an explanation of the mental model for AI systems)

AI opportunities aren’t created equal

AI is often used to automate existing tasks, but the more space you allow for creativity and innovation when selecting your AI use cases, the more likely they will result in a competitive advantage. You should also prioritize the unique needs and strengths of your company when evaluating opportunities.

Identifying use cases with differentiation potential

When we brainstorm AI use cases with customers, 90% of them typically fall into one of four buckets — productivity, improvement, personalization, and innovation. Let’s take the example of an airline business to illustrate some opportunities across these categories:

Figure 2: Mapping AI opportunities for an airline

Of course, the first branch — productivity and automation — looks like the low-hanging fruit. It is the easiest one to implement, and automating boring routine tasks has an undeniable efficiency benefit. However, if you’re limiting your use of AI to basic automation, don’t be surprised when your competitors do the same. In our experience, strategic advantage is built up in the other branches. Companies that take the time to figure out how AI can help them offer something different, not just faster or cheaper, are the ones that see long-term results.

As an example, let’s look at a project we recently implemented with the Lufthansa Group. The company wanted to systematize and speed up its innovation processes. We developed an AI tool that acts as a giant sensor into the airline market, monitoring competitors, trends, and the overall market context. Based on this broad information, the tool now provides tailored innovation recommendations for Lufthansa. There are several aspects that cannot be easily imitated by potential competitors, and certainly not by just using a bigger AI model:

  • Understanding which information exactly is needed to make decisions about new innovation initiatives
  • Blending public data with unique company-specific knowledge
  • Educating users at company scale on the right usage of the data in their assessment of new innovation initiatives

All of this is novel know-how that was developed in tight cooperation between industry experts, practitioners, and a specialized AI team, involving lots of discovery, design decisions, and stakeholder alignment. If you get all of these aspects right, I believe you are on a good path toward creating a sustainable and defensible advantage with AI.

Finding your unique sweet spot for value creation

Value creation with AI is a highly individual affair. I recently experienced this firsthand when I challenged myself to build and launch an end-to-end AI app on my own. I’m comfortable with Python and don’t massively benefit from AI help there, but other stuff like frontend? Not really my home turf. In this situation, AI-powered code generation worked like a charm. It felt like flowing through an effortless no-code tool, while having all the versatility of the underlying — and unfamiliar — programming languages under my fingertips. This was my very own, personal sweet spot — using AI where it unlocks value I wouldn’t otherwise tap into, and sparing a frontend developer on the way. Most other people would not get so much value out of this case:

  • A professional front-end developer would not see such a drastic increase in speed.
  • A person without programming experience would hardly ever get to the finish line. You must understand how programming works to correctly prompt an AI model and integrate its outputs.

While this is a personal example, the same principle applies at the corporate level. For good or for bad, most companies have some notion of strategy and core competence driving their business. The secret is about finding the right place for AI in that equation — a place where it will complement and amplify the existing skills.

Data — a game of effort

Data is the fuel for any AI system. Here, success comes from curating high-quality, focused datasets and continuously adapting them to evolving needs. By blending AI with your unique expertise and treating data as a dynamic resource, you can transform information into long-term strategic value.

Managing knowledge and domain expertise

To illustrate the importance of proper knowledge management, let’s do a thought experiment and travel to the 16th century. Antonio and Bartolomeo are the best shoemakers in Florence (which means they’re probably the best in the world). Antonio’s family has meticulously recorded their craft for generations, with shelves of notes on leather treatments, perfect fits, and small adjustments learned from years of experience. On the other hand, Bartolomeo’s family has kept their secrets more closely guarded. They don’t write anything down; their shoemaking expertise has been passed down verbally, from father to son.

Now, a visionary named Leonardo comes along, offering both families a groundbreaking technology that can automate their whole shoemaking business — if it can learn from their data. Antonio comes with his wagon of detailed documentation, and the technology can directly learn from those centuries of know-how. Bartolomeo is in trouble — without written records, there’s nothing explicit for the AI to chew on. His family’s expertise is trapped in oral tradition, intuition, and muscle memory. Should he try to write all of it down now — is it even possible, given that most of his work is governed intuitively? Or should he just let it be and go on with his manual business-as-usual? Succumbing to inertia and uncertainty, he goes for the latter option, while Antonio’s business thrives and grows with the help of the new technology. Freed from daily routine tasks, he can get creative and invent new ways to make and improve shoes.

Beyond explicit documentation, valuable domain expertise is also hidden across other data assets such as transactional data, customer interactions, and market insights. AI thrives on this kind of information, extracting meaning and patterns that would otherwise go unnoticed by humans.

Quality over quantity

Data doesn’t need to be big — on the contrary, today, big often means noisy. What’s critical is the quality of the data you’re feeding into your AI system. As models become more sample-efficient — i.e., able to learn from smaller, more focused datasets — the kind of data you use is far more important than how much of it you have.

In my experience, the companies that succeed with AI treat their data — be it for training, fine-tuning, or evaluation — like a craft. They don’t just gather information passively; they curate and edit it, refining and selecting data that reflects a deep understanding of their specific industry. This careful approach gives their AI sharper insights and a more nuanced understanding than any competitor using a generic dataset. I’ve seen firsthand how even small improvements in data quality can lead to significant leaps in AI performance.
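Treating data as a craft can be made concrete with a simple curation pass. The sketch below is a minimal, illustrative example of the idea: deduplicate records and drop low-signal entries before they reach training or evaluation. The length bounds and duplicate check are hypothetical heuristics, not a universal standard; real pipelines would add domain-specific quality filters on top.

```python
def curate(records, min_len=30, max_len=2000):
    """Keep unique, reasonably sized text records (illustrative heuristics)."""
    seen = set()
    kept = []
    for rec in records:
        text = rec["text"].strip()
        if not (min_len <= len(text) <= max_len):
            continue  # too short to carry signal, or too long and noisy
        key = text.lower()
        if key in seen:
            continue  # exact duplicate adds no new information
        seen.add(key)
        kept.append({**rec, "text": text})
    return kept

raw = [
    {"text": "Shipment delayed 3 days at Rotterdam due to port congestion."},
    {"text": "Shipment delayed 3 days at Rotterdam due to port congestion."},
    {"text": "ok"},  # below the minimum length, dropped
]
curated = curate(raw)
```

Even this naive pass removes two of the three records here; in practice, the high-leverage work is choosing which heuristics encode your domain's notion of quality.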

Capturing the dynamics with the data flywheel

Data needs to evolve along with the real world. That’s where DataOps comes in, ensuring data is continuously adapted and doesn’t drift away from reality. The most successful companies understand this and regularly update their datasets to reflect changing environments and market dynamics. A powerful mechanism to achieve this is the data flywheel: the more insights your AI generates, the more often users return to your system, and the more data they contribute back, creating a self-reinforcing feedback loop. With every cycle, your data sharpens and your AI improves, building an advantage that competitors will struggle to match. To kick off the data flywheel, your system needs to demonstrate some initial value — then you can bake in additional incentives that nudge users into using your system regularly.
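One way to ground the flywheel idea: capture every user interaction, including corrections, as a new labeled example for the next fine-tuning or evaluation round. The storage and schema below are simplified assumptions, a sketch rather than a production design.

```python
class FeedbackStore:
    """Minimal sketch of a flywheel: usage data becomes training data."""

    def __init__(self):
        self.examples = []  # grows with every user interaction

    def log_prediction(self, inputs, prediction, user_correction=None):
        # The accepted or corrected output becomes a new labeled example.
        label = user_correction if user_correction is not None else prediction
        self.examples.append({"inputs": inputs, "label": label})

store = FeedbackStore()
store.log_prediction({"route": "Shanghai-Hamburg"}, "ETA +2 days")
store.log_prediction({"route": "Shanghai-Hamburg"}, "ETA +2 days",
                     user_correction="ETA +4 days (customs backlog)")
```

The second call is where the flywheel earns its keep: a domain expert's correction is worth far more than another raw log line, because it encodes judgment the base model lacked.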

Figure 3: The data flywheel is a self-reinforcing feedback loop between users and the AI system

Intelligence: Sharpening your AI tools

Now, let’s dive into the “intelligence” component. This component isn’t just about AI models in isolation — it’s about how you integrate them into larger intelligent systems. Big Tech is working hard to make us believe that AI success hinges on the use of massive LLMs such as the GPT models. Good for them — bad for those of us who want to use AI in real-life applications. Overrelying on these heavyweights can bloat your system and quickly become a costly liability, while smart system design and tailored models are important sources for differentiation and competitive advantage.

Toward customization and efficiency

Mainstream LLMs are generalists. Like high-school graduates, they deliver mediocre-to-decent performance across a wide range of tasks. But in business, decent isn’t enough. You need to send your AI model to university so it can specialize, respond to your specific business needs, and excel in your domain. This is where fine-tuning comes into play. It’s also important to recognize that mainstream LLMs, while powerful, can quickly become slow and expensive if not managed efficiently. While Big Tech boasts about larger model sizes and longer context windows — i.e., how much information you can feed into one prompt — smart tech is quietly moving towards efficiency. Techniques like prompt compression reduce prompt size, making interactions faster and more cost-effective. Small language models (SLMs) are another trend (Figure 4). With up to a few billion parameters, they allow companies to safely deploy task- and domain-specific intelligence on their internal infrastructure (Anacode).
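To make prompt compression tangible, here is a deliberately naive extractive sketch: trim retrieved context to a word budget, preferring sentences that share terms with the query. Production systems use learned compressors, but the principle is the same — a smaller prompt carrying the same relevant signal. The example sentences and the word budget are illustrative.

```python
def compress_context(query, sentences, budget_words=40):
    """Keep the most query-relevant sentences within a word budget."""
    q_terms = set(query.lower().split())
    # Rank sentences by naive term overlap with the query.
    ranked = sorted(sentences,
                    key=lambda s: len(q_terms & set(s.lower().split())),
                    reverse=True)
    kept, used = [], 0
    for s in ranked:
        n = len(s.split())
        if used + n > budget_words:
            continue  # over budget, skip this sentence
        kept.append(s)
        used += n
    return " ".join(kept)

ctx = [
    "Port congestion in Rotterdam is delaying container traffic.",
    "The company picnic was rescheduled to June.",
    "Average delay for Rotterdam arrivals is currently 3 days.",
]
compressed = compress_context("current delay Rotterdam port", ctx,
                              budget_words=12)
```

The off-topic picnic sentence is squeezed out, so fewer tokens reach the model without losing the signal the query actually needs.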

Figure 4: Small Language Models are gaining attention as the inefficiencies of mainstream LLMs become apparent

But before fine-tuning an LLM, ask yourself whether generative AI is even the right solution for your specific challenge. In many cases, predictive AI models — those that forecast outcomes rather than generate content — are more effective, cheaper, and easier to defend from a competitive standpoint. And while this might sound like old news, most AI value creation in businesses still happens with predictive AI.

Crafting compound AI systems

AI models don’t operate in isolation. Just as the human brain consists of multiple regions, each responsible for specific capabilities like reasoning, vision, and language, a truly intelligent AI system often involves multiple components. This is also called a “compound AI system” (BAIR). Compound systems can accommodate different models, databases, and software tools and allow you to optimize for cost and transparency. They also enable faster iteration and extension — modular components are easier to test and rearrange than a huge monolithic LLM.
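The modularity argument can be sketched in a few lines: a toy retriever, a stubbed model call, and a rule-based validator wired into one pipeline. Every component internal here is an illustrative stand-in (a real system would use a vector store and an actual model), but the structural point holds — each piece can be swapped, tested, and optimized in isolation.

```python
def retrieve(query, knowledge_base):
    # Toy keyword retriever; a real system would use a vector store.
    return [doc for doc in knowledge_base
            if any(w in doc.lower() for w in query.lower().split())]

def generate(query, context):
    # Stand-in for a model call; returns a templated answer.
    return f"Based on {len(context)} document(s): likely delay on '{query}'."

def validate(answer, context):
    # Guardrail component: refuse to answer without supporting evidence.
    return answer if context else "Insufficient evidence, escalating to a human."

def pipeline(query, knowledge_base):
    ctx = retrieve(query, knowledge_base)
    return validate(generate(query, ctx), ctx)

kb = ["Rotterdam port congestion: +3 days", "Suez transit normal"]
grounded = pipeline("rotterdam delay", kb)
refused = pipeline("unrelated topic", kb)
```

Note how the validator gives the system a transparent failure mode that a single monolithic LLM call would not: an unsupported query is escalated instead of answered with a guess.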

Figure 5: Companies are moving from monolithic models to compound AI systems for better customization, transparency, and iteration (image adapted from BAIR)

Take, for example, a customer service automation system for an SME. In its basic form — calling a commercial LLM — such a setup might cost a significant amount, say $21k/month for a “vanilla” system. This cost can easily scare an SME away from pursuing the opportunity at all. However, with careful engineering, optimization, and the integration of multiple models, costs can be reduced by as much as 98% (FrugalGPT). Yes, you read that right: 2% of the original cost — a staggering difference that puts a company with stronger AI and engineering skills at a clear advantage. At the moment, most businesses are not leveraging these advanced techniques, and we can only imagine how much room remains to optimize their AI usage.
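One of the core ideas behind FrugalGPT-style savings is the model cascade: route each query to a cheap model first and escalate to an expensive one only when the cheap answer looks unreliable. The sketch below is a hedged illustration of that routing logic; the models, per-query costs, confidence scores, and threshold are all hypothetical stand-ins.

```python
CHEAP_COST, EXPENSIVE_COST = 0.001, 0.03  # illustrative $/query

def cheap_model(query):
    # Stand-in: confident on short routine queries, unsure on complex ones.
    confident = len(query.split()) <= 6
    return {"answer": f"cheap:{query}", "confidence": 0.9 if confident else 0.3}

def expensive_model(query):
    return {"answer": f"expensive:{query}", "confidence": 0.95}

def cascade(query, threshold=0.8):
    """Try the cheap model; escalate only when its confidence is low."""
    result = cheap_model(query)
    cost = CHEAP_COST
    if result["confidence"] < threshold:
        result = expensive_model(query)  # escalate only when needed
        cost += EXPENSIVE_COST
    return result["answer"], cost

answer, cost = cascade("where is my order")
```

Since most customer-service traffic is routine, the expensive model is rarely touched, and that is where the bulk of the cost reduction comes from.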

Generative AI isn’t the finish line

While generative AI has captured everyone’s imagination with its ability to produce content, the real future of AI lies in reasoning and problem-solving. Unlike content generation, reasoning is nonlinear — it involves skills like abstraction and generalization, which generative AI models aren’t trained for.

AI systems of the future will need to handle complex, multi-step activities that go far beyond what current generative models can do. We’re already seeing early demonstrations of AI’s reasoning capabilities, whether through language-based emulations or engineered add-ons. However, the limitations are apparent — past a certain threshold of complexity, these models start to hallucinate. Companies that invest in crafting AI systems designed to handle these complex, iterative processes will have a major head start. These companies will thrive as AI moves beyond its current generative phase and into a new era of smart, modular, and reasoning-driven systems.

User experience: Seamless integration into user workflows

User experience is the channel through which you deliver the value of AI to users. It should seamlessly deliver the benefits users need to speed up and improve their workflows, while filtering or mitigating inherent AI risks such as erroneous outputs.

Optimizing on the strengths of humans and AI

In most real-world scenarios, AI alone can’t achieve full automation. For example, at my company Equintel, we use AI to assist in the ESG reporting process, which involves multiple layers of analysis and decision-making. While AI excels at large-scale data processing, there are many subtasks that demand human judgment, creativity, and expertise. An ergonomic system design reflects this labor distribution, relieving humans from tedious data routines and giving them the space to focus on their strengths.

This strength-based approach also alleviates common fears of job replacement. When employees are empowered to focus on tasks where their skills shine, they’re more likely to view AI as a supporting tool, not a competitor. This fosters a win-win situation where both humans and AI thrive by working together.

Calibrating user trust

Every AI model has an inherent failure rate. Whether through generative AI hallucinations or incorrect outputs from predictive models, mistakes happen and accumulate into the dreaded “last-mile problem.” Even if your AI system performs well 90% of the time, a small error rate can quickly become a showstopper if users overtrust the system and fail to catch its errors.

Consider a bank using AI for fraud detection. If the AI fails to flag a fraudulent transaction and the user doesn’t catch it, the resulting loss could be significant — let’s say $500,000 siphoned from a compromised account. Without proper trust calibration, users might lack the tools or alerts to question the AI’s decision, allowing fraud to go unnoticed.

Now, imagine another bank using the same system but with proper trust calibration in place. When the AI is uncertain about a transaction, it flags it for review, even if it doesn’t outright classify it as fraud. This additional layer of trust calibration encourages the user to investigate further, potentially catching fraud that would have slipped through. In this scenario, the bank could avoid the $500,000 loss. Multiply that across multiple transactions, and the savings — along with improved security and customer trust — are substantial.
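The calibration step described above can be sketched as a simple triage rule: instead of a single fraud/no-fraud cutoff, scores in an uncertainty band are routed to a human analyst. The thresholds below are illustrative assumptions, not recommended values.

```python
def triage(fraud_score, block_at=0.9, review_at=0.5):
    """Route a transaction based on the model's fraud score."""
    if fraud_score >= block_at:
        return "block"          # high confidence: act automatically
    if fraud_score >= review_at:
        return "human_review"   # uncertain: flag instead of auto-clearing
    return "approve"            # low risk: let it through
```

Under a single 0.9 cutoff, a transaction scoring 0.62 would be silently approved; with the review band, it lands on an analyst's desk — which is exactly the kind of case that would otherwise become a $500,000 loss.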

Combining AI efficiency and human ingenuity is the new competitive frontier

Success with AI requires more than just adopting the latest technologies — it’s about identifying and nurturing the individual sweet spots where AI can drive the most value for your business. This involves:

  • Pinpointing the areas where AI can create a significant impact.
  • Aligning a top-tier team of engineers, domain experts, and business stakeholders to design AI systems that meet these needs.
  • Ensuring effective AI adoption by educating users on how to maximize its benefits.

Finally, I believe we are moving into a time when the notion of competitive advantage itself is shaken up. While in the past, competing was all about maximizing profitability, today, businesses are expected to balance financial gains with sustainability, which adds a new layer of complexity. AI has the potential to help companies not only optimize their operations but also move toward more sustainable practices. Imagine AI helping to reduce plastic waste, streamline shared economy models, or support other initiatives that make the world a better place. The real power of AI lies not just in efficiency but in the potential it offers us to reshape whole industries and drive both profit and positive social impact.

For deep-dives into many of the topics touched on in this article, check out my upcoming book The Art of AI Product Development.

Note: Unless noted otherwise, all images are the author’s.

This article was originally published on Towards Data Science and re-published to TOPBOTS with permission from the author.


The post Carving Out Your Competitive Advantage With AI appeared first on TOPBOTS.