If you watch AI coding tutorials on YouTube, you probably know the drill: the prompts work perfectly, the code compiles on the first try, and a complex project is finished in minutes. It looks like magic.
But if you are a developer working on real-world architecture, you know that isn’t the reality.
I am starting a new series where I build a full-scale Business Central Webhook integration using AL code and the Azure stack. But I’m doing something different: I’m leaving the filters off. You are going to see the hallucinations, the logic errors, and exactly how I debug a complex ERP integration when the AI gets it wrong.
To do this, I made a major change to my stack: I stopped using GitHub Copilot. Here is why.
The Problem with GitHub Copilot
Don’t get me wrong, Copilot is a fantastic tool. But it has a hidden ceiling for power users.
When you are doing heavy architectural AI work—generating massive blocks of AL code or complex Azure configurations—you inevitably hit fair usage policies.
Basically, if you push it too hard, they throttle you. You run out of “fast tokens,” the responses slow down, and you get stuck waiting for a reset. In a real professional workflow, I cannot afford that bottleneck.
The Solution: OpenRouter + Roo Code
My new workflow relies on two specific tools that solve the throttling issue and add massive functionality.
1. OpenRouter (The Engine)

Instead of a flat fee for limited “fast” access, I switched to OpenRouter, which uses a pay-as-you-go model. I pay for exactly what I use. If I need to burn through a million tokens to solve a hard problem, I can do that without hitting an invisible wall.
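To make the pay-as-you-go idea concrete, here is a minimal sketch of how a token bill adds up. The per-million-token rates below are hypothetical placeholders for illustration, not OpenRouter’s actual prices:

```python
def token_cost(prompt_tokens: int, completion_tokens: int,
               prompt_rate: float, completion_rate: float) -> float:
    """Cost in USD, with rates quoted per million tokens."""
    return (prompt_tokens * prompt_rate
            + completion_tokens * completion_rate) / 1_000_000

# Hypothetical rates: $3 per 1M prompt tokens, $15 per 1M completion tokens.
# A million-token debugging session (800k in, 200k out) at these rates:
burn = token_cost(800_000, 200_000, 3.00, 15.00)
print(f"${burn:.2f}")  # → $5.40
```

The point isn’t the exact numbers; it’s that the cost scales linearly with usage, so a heavy session costs a few dollars instead of hitting a hard throttle.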
2. Roo Code (The Driver)

To drive OpenRouter, I am using a VS Code extension called Roo Code. This isn’t just a chat window; it is an autonomous coding agent.
The Killer Feature: Multi-Agent Workflow
The real reason I made the switch to Roo Code is the multi-agent workflow. It’s like having a full dev team inside VS Code, with each member specialized for a specific job.
The true power here is that each agent can run its own specific LLM. You aren’t locked into one model for everything; you can pick the best tool for the task:
- The Orchestrator: This agent acts as the Product Manager, breaking down plans. For this, we can use Gemini, leveraging its massive context window to keep the entire project scope in view.
- Code & Debug Agents: These handle the actual syntax and fixing errors. Here, we can swap to Claude, because it is incredibly fast and produces high-quality code.
- Documentation Writer: When it’s time to write docs, we can assign a custom agent powered by ChatGPT, which excels at natural language and explanations.
This flexibility means we are never compromising. We get the best context, the best code, and the best writing, all working together in one orchestrator-led system.
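As a rough sketch of what this role-based routing looks like, here is a hypothetical mapping of agent roles to OpenRouter-style model identifiers. The specific model IDs and the fallback logic are my assumptions for illustration, not Roo Code’s actual internals:

```python
# Hypothetical role-to-model routing table (OpenRouter-style model IDs).
AGENT_MODELS = {
    "orchestrator": "google/gemini-2.5-pro",  # large context for planning
    "code": "anthropic/claude-sonnet-4",      # fast, high-quality code
    "debug": "anthropic/claude-sonnet-4",
    "docs": "openai/gpt-4o",                  # natural-language writing
}

def pick_model(role: str) -> str:
    """Return the model assigned to a role, falling back to the orchestrator."""
    return AGENT_MODELS.get(role, AGENT_MODELS["orchestrator"])

print(pick_model("code"))    # anthropic/claude-sonnet-4
print(pick_model("review"))  # unknown role falls back: google/gemini-2.5-pro
```

The design choice this illustrates: the routing table is just configuration, so swapping the documentation writer from one model to another is a one-line change rather than a workflow rebuild.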
Join the “Unfiltered” Journey
This series is about learning how to actually work with AI, not just showcasing a magic trick.
I will be recording my screen and sharing the workflow as it happens. If the AI hallucinates a table that doesn’t exist in BC, you’ll see it. If Azure rejects our deployment, we will fix it together.
If you want to see if AI can actually handle a real-world Business Central project—scars and all—subscribe to the channel and follow along.