DataFlex, AI Agents and RAG
Business systems with intelligence and initiative
AI agents are autonomous software entities, powered by artificial intelligence, that can perform tasks, make decisions and interact with people and systems – without constant human intervention. They are more than just chatbots: they are digital colleagues that can understand business logic, context and data, and they can be customized to work exactly how you need them to work.
One of the most powerful methods we use is RAG – Retrieval-Augmented Generation. With RAG, an AI agent can not only generate answers or suggestions based on its training, but also actively retrieve and combine information from the organization’s own databases, documents or APIs. The result? An AI that is both creative and accurate, and that knows your reality.
Imagine an AI agent that:
- Answers complex customer questions by reading directly from your support system.
- Assists employees by summarizing cases, analyzing historical data and suggesting the next steps.
- Automatically updates the CRM system with relevant information from emails and meeting notes.
For companies working with database-driven business applications developed in DataFlex, AI agents with RAG are particularly interesting. They can access existing business logic, tables and business rules, bridging the gap between the technical world and the user’s intent. This means faster support, more efficient workflows and a significant boost in data-driven decision-making support.
We help you build and integrate AI agents that not only fit into your systems – but understand them, learn from them and improve them. If you want to be among the first to not just talk about AI, but to use it purposefully and effectively, let’s talk.
What is RAG – Retrieval-Augmented Generation?
RAG (Retrieval-Augmented Generation) is an architecture where a language model (LLM) is combined with external information sources – typically a database, document store or API – to provide contextual and fact-based answers. Instead of relying solely on its training data, the model can ‘look up’ your company-specific data in real time.
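To make the idea concrete, here is a minimal sketch in Python. It is illustrative only: the retriever is a stub, and the model name and prompt wording are assumptions, not part of any DataFlex or vendor API.

```python
# Minimal RAG skeleton: retrieve context first, then let the LLM answer.
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def retrieve_context(question: str) -> str:
    # Stub: in practice, query your database, document store or API here.
    return "Customer #1042: 18 orders since May 2024, total 842,500 DKK."

def rag_answer(question: str) -> str:
    context = retrieve_context(question)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the provided context. "
                        "If the context is insufficient, say so."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(rag_answer("What is the total revenue for customer #1042?"))
```

The key point is the prompt assembly: the model never has to guess, because the retrieved context travels along with the question.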
For DataFlex developers working with database-driven applications, this means the following:
Traditional LLM vs. RAG
- Traditional LLM (like ChatGPT alone):
  - Guesses a response based on patterns in training data.
  - No access to your specific business data.
  - Risk of hallucinations (i.e. invented answers).
- RAG architecture:
  - Pulls relevant context from your own sources first.
  - Sends both the user prompt and the retrieved context to the LLM.
  - Provides answers that are correct, contextually relevant and dynamic.
A typical RAG flow in a database application
- The user’s question:
  - For example: “What is the total revenue for customer #1042 for the last 12 months?”
- Retriever (see the code sketch after this list):
  - A search function (SQL, Elasticsearch, vector search) finds relevant data points: invoices, orders, notes, etc. related to customer #1042.
  - These are extracted and structured as “context”.
- LLM call:
- The context + the user’s question are sent as a prompt to the language model.
- E.g. via OpenAI, Mistral, Claude, or a local LLM via API.
- Response generation:
- The model responds in natural language:
  “Customer #1042 has generated revenue of 842,500 DKK since May 2024, spread over 18 orders. The largest single order was on 14 Nov 2024 at 112,000 DKK.”
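To illustrate the Retriever step, here is a sketched lookup against a relational database. sqlite3 stands in for the production database, and the orders table and its columns are invented for the example:

```python
# Retriever sketch: turn raw rows into plain-text context for the LLM.
import sqlite3

def fetch_customer_context(db_path: str, customer_id: int) -> str:
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(
            """
            SELECT COUNT(*), SUM(amount), MAX(amount)
            FROM orders
            WHERE customer_id = ?
              AND order_date >= DATE('now', '-12 months')
            """,
            (customer_id,),
        )
        count, total, largest = cur.fetchone()
    finally:
        conn.close()
    return (f"Customer #{customer_id}: {count} orders in the last 12 months, "
            f"total {total} DKK, largest single order {largest} DKK.")

# The returned string is then sent to the language model together with
# the user's question, as in the LLM call step above.
```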
Benefits for database applications
- No retraining required: You don’t need to train the language model on your data – you simply retrieve the right data when you need it.
- Data is kept in-house: You can use local language models or proxies with data control and access management.
- Dynamic and accurate: Answers are always adapted to the latest data, which is crucial in business contexts.
Example technology for RAG in practice
- Retriever:
  - SQL via n8n, LangChain, LlamaIndex, Supabase etc.
  - Alternatively: vector search with Weaviate, Qdrant, Pinecone (see the sketch after this list).
- Language model:
  - Hosted: OpenAI, Claude, Mistral API.
  - On-prem: LLaMA, Mistral, Command R+ via Ollama or LM Studio.
- Orchestration:
  - Perfect for n8n: combine database nodes, file management and HTTP calls in a visual workflow.
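And a small sketch of the vector-search variant. Plain numpy cosine similarity stands in for a dedicated vector database such as Weaviate, Qdrant or Pinecone, and the embedding model name is an illustrative assumption:

```python
# In-memory vector search: embed documents once, retrieve by similarity.
import numpy as np
from openai import OpenAI  # pip install openai numpy

client = OpenAI()

DOCS = [
    "Invoice 2024-118: customer #1042, 112,000 DKK, 14 Nov 2024.",
    "Support note: customer #1042 asked about volume discounts.",
    "Invoice 2024-061: customer #2077, 9,400 DKK, 3 Jun 2024.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(
        model="text-embedding-3-small",  # illustrative model choice
        input=texts,
    )
    return np.array([d.embedding for d in resp.data])

DOC_VECTORS = embed(DOCS)

def top_k(question: str, k: int = 2) -> list[str]:
    q = embed([question])[0]
    # Cosine similarity: dot products divided by the vector norms.
    sims = DOC_VECTORS @ q / (
        np.linalg.norm(DOC_VECTORS, axis=1) * np.linalg.norm(q))
    return [DOCS[i] for i in np.argsort(sims)[::-1][:k]]

print(top_k("What did customer #1042 order in November?"))
```

In an n8n workflow, the same pattern maps onto a database or HTTP node for retrieval followed by an LLM node for generation.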
So… why does it matter?
Because it makes your applications intelligent. Not just reactive, but proactive. Not just storing data, but understanding it.
With RAG, we can build agents that speak your data language, understand your business rules, and act on them. It’s not AI as a toy. It’s AI as a business engine.
If you need further information, you can contact Sture ApS:
Sture Andersen: sture@stureaps.dk
Jakob Kruse: jk@stureaps.dk