Forget Fine-Tuning: Orchestrate Your ERP AI

Have a conversation with manufacturing executives about AI, and you will hear a familiar goal: “We need to train a custom model on all our historical ERP data so it learns our unique business.”
Believe it or not, I hear this more often than ever these days. Leaders believe that by dumping twenty years of customized tables, legacy code, and unique workflows into a Large Language Model, the AI will magically absorb their secret sauce.
Let me be entirely clear: this shows how poorly modern Artificial Intelligence is actually understood. More importantly, it is the fastest way to burn enormous sums of money on a stalled IT project.
Stop trying to train GenAI on your legacy ERP data. It doesn’t work, and frankly, it isn’t necessary.
Let me break down exactly why the fine-tuning trap bankrupts enterprise AI initiatives, and how to get autonomous agents to execute highly customized business logic without destroying your budget.
The Fine-Tuning Trap
When IT leaders talk about teaching an AI their specific business rules, they usually default to a process called fine-tuning. This involves taking a pre-trained model and aggressively adjusting its internal weights (the mathematical connections that determine how the AI thinks and values information) by feeding it your company’s highly customized historical data.
From an architectural perspective, there are three critical issues with fine-tuning an AI on a legacy ERP:
1. Semantic Gap
AI models are pre-trained to understand global, standard business semantics. They know what a standard Order-to-Cash flow looks like across the industry. If your competitive advantage is buried in a custom database table called t_z_special_discount_v3, the AI can read the text, but it completely lacks the contextual understanding of what that table actually does. You are essentially asking a brilliant but generalized brain to read a language only one retired developer in your company speaks.
2. Extreme Rigidity
Fine-tuning is essentially forcing the AI to memorize rigid patterns. If your pricing logic changes next month to adapt to a new market shift, your fine-tuned model is instantly obsolete. You are forced to start the expensive, time-consuming training cycle all over again.
3. The Hallucination Risk
LLMs are fantastic at language, but they are terrible at rigid mathematics and complex flowcharts. If you try to force a probabilistic model to memorize a 50-step proprietary supply chain logic, I can guarantee it will eventually hallucinate, confidently executing the wrong process at lightspeed.
If we don’t fine-tune the AI, how do we get it to understand and execute our unique competitive advantage?
In a modern enterprise architecture, we don’t train the AI on our business rules. We contextualize it, and we give it tools. We achieve this through two paradigms: RAG and Function Calling.
RAG (Retrieval-Augmented Generation)
Instead of forcing the AI to memorize your complex company policies, supplier agreements, or unique warehousing rules, you use RAG.
Think of RAG as giving the AI an open-book exam.
Instead of altering the AI’s internal brain, you take all the documentation of your secret sauce (your standard operating procedures, your policy manuals, and so on) and index it into a secure Vector Database.
When a planner asks the AI, “How do we handle a delayed shipment from Supplier X for this specific product line?”, the AI doesn’t guess based on its pre-training. First, it searches your Vector Database. It retrieves your exact, proprietary rule for that specific scenario, reads it, and formulates a response combining its fluid intelligence with your rigid, factual rules.
You never trained the AI. You simply gave it the exact document it needed, exactly when it needed it. If your policy changes tomorrow, you just update the document in the database. Zero re-training required.
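The retrieval flow above can be sketched in a few lines. This is a deliberately minimal illustration: a toy keyword-overlap score stands in for the embedding similarity search a real Vector Database would perform, and the policy documents and their wording are invented for the example, not taken from any real system.

```python
# Minimal RAG sketch: retrieve the relevant policy, then ground the
# LLM prompt in it. In production, documents would be embedded with a
# model and searched in a vector database; keyword overlap is a stand-in.

POLICY_DOCS = {
    "delayed_shipment": (
        "If a Supplier X shipment is delayed, reroute open orders to the "
        "backup warehouse and notify planning within 4 hours."
    ),
    "freight_costs": (
        "Custom freight costs are calculated by the external freight API, "
        "never estimated manually."
    ),
}

def retrieve(query: str) -> str:
    """Return the policy document with the highest word overlap with the query."""
    query_words = set(query.lower().split())
    def score(doc: str) -> int:
        return len(query_words & set(doc.lower().split()))
    return max(POLICY_DOCS.values(), key=score)

def build_prompt(query: str) -> str:
    """Ground the model's answer in the retrieved policy, not its pre-training."""
    context = retrieve(query)
    return f"Answer using ONLY this policy:\n{context}\n\nQuestion: {query}"

print(build_prompt("How do we handle a delayed shipment from Supplier X?"))
```

Note the key property: when the policy text changes, only the `POLICY_DOCS` entry is updated; the model itself is never retrained.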
Function Calling: The Agent as an Orchestrator
While RAG is fantastic for text and corporate policies, what about transactional business logic? What if your secret sauce is a highly complex algorithm that calculates customized freight costs based on real-time, multi-variable constraints?
You absolutely do not want an LLM trying to do that math. Instead, you use Function Calling (Agentic APIs). This is exactly where the concept of Cloud ERP Extensibility truly shines.
In a modern Cloud ERP environment like Infor LN, your custom business logic is no longer buried inside the core database, but is rather built on the perimeter of the system as an extension, accessible via secure APIs.
With Function Calling, you expose the existence and purpose of your freight algorithm directly to the AI.
You provide the AI with a digital toolbox. You tell it: “When a user asks to calculate custom freight costs, do not try to do the math yourself. Call this specific API, feed it the destination and weight, and give the user back the exact number it returns.”
The AI transforms into an orchestrator. It acts as the intelligent bridge between the human user and your hard-coded secret sauce. The AI handles the natural language intent, but your flawless, custom-built API extension does the actual heavy lifting.
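The orchestration pattern can be sketched as follows. The tool registry mirrors the shape of mainstream function-calling APIs, but the tool name, the freight formula, and the simulated model output are all illustrative assumptions, not Infor LN specifics: the point is that the model only selects a tool and supplies arguments, while the deterministic API does the math.

```python
# Minimal function-calling sketch: the LLM never computes freight costs.
# It emits a tool-call request; the agent dispatches it to real business logic.

def calculate_freight(destination: str, weight_kg: float) -> float:
    """Stand-in for the company's custom freight API extension."""
    base_rates = {"EU": 12.0, "US": 20.0}  # illustrative rates only
    return round(base_rates[destination] + 0.5 * weight_kg, 2)

# Tool registry the agent advertises to the model, with descriptions the
# model uses to decide WHEN to call each tool.
TOOLS = {
    "calculate_freight": {
        "description": "Calculate custom freight cost for a destination and weight.",
        "handler": calculate_freight,
    },
}

def execute_tool_call(name: str, arguments: dict) -> float:
    """Dispatch a model-requested tool call to the hard-coded business logic."""
    return TOOLS[name]["handler"](**arguments)

# Simulated model output: the LLM has parsed the user's intent into a
# structured call. The agent executes it and returns the exact number.
model_call = {
    "name": "calculate_freight",
    "arguments": {"destination": "EU", "weight_kg": 100.0},
}
print(execute_tool_call(model_call["name"], model_call["arguments"]))
```

The design choice worth noting: the natural-language layer and the business-logic layer never blur. The model handles intent; the API handles arithmetic.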
Trade-Offs
To be intellectually honest, we must acknowledge that moving away from fine-tuning and relying entirely on RAG and Function Calling is not a magic bullet; sooner or later, the pain points must be addressed. This architecture comes with its own set of trade-offs:
- Latency and Token Costs: because RAG requires the AI to retrieve and read your documents every single time a question is asked, it adds milliseconds (or seconds) to the response time. Furthermore, repeatedly sending large chunks of retrieved context to an LLM API consumes more tokens, increasing your operational run-rate costs over time.
- Retrieval Dependency: a RAG system is only as smart as its search engine. If your Vector Database retrieves the wrong company policy, the AI will confidently give the user the wrong answer based on that flawed context.
- Modernization Prerequisite: function calling assumes your ERP architecture is modern enough to expose your legacy customizations as clean, consumable APIs. If your secret sauce is hopelessly tangled in monolithic legacy code, you will have to invest heavily in decoupling and modernizing that logic before an AI agent can ever orchestrate it.
My Final Take
The crisis facing legacy ERP users in the coming decade isn’t that Cloud systems will destroy their intellectual property. The harsh reality is that their intellectual property is currently trapped in a format that Artificial Intelligence cannot actually use.
If you want to achieve a massive ROI on enterprise AI, the architectural playbook is clear:
- Stop customizing the core of your ERP. This does not mean you stop customizing the system to fit your business needs. It means you keep the foundational code strictly standard so the AI can read it natively.
- Stop trying to fine-tune AI models on dirty legacy data.
- Rethink your Custom Code and rebuild it as external APIs using modern techniques.
- Give your AI agents access to those APIs via Function Calling and ground their knowledge using RAG.
Your proprietary logic is the most valuable asset your company owns. Stop trying to teach it to a probabilistic algorithm. Turn it into a tool, hand it to the autonomous agent, and let the intelligence do what it does best: orchestrate.
Written by Andrea Guaccio
April 14 2026