Use the src, 'k?
In Which the Author Talks About ksrc, detektifier, and Copilot's Pricing Changes
As I have mentioned before, I do not like agent harnesses with unconstrained Bash tool access, for security reasons among others. My objective is to limit the number of discrete CLI tools that agents need to access, with a hope of eventually configuring an allowlist to constrain what the harness lets an agent run.
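To make the idea concrete, here is the shape such an allowlist can take in Claude Code's `settings.json` permissions block. The specific tool patterns are illustrative examples, not my actual configuration:

```json
{
  "permissions": {
    "allow": [
      "Bash(ksrc:*)",
      "Bash(./gradlew:*)"
    ],
    "deny": [
      "Bash(curl:*)"
    ]
  }
}
```

The fewer discrete CLI tools the agent genuinely needs, the shorter and tighter that `allow` list can be.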
As a Kotlin developer, one of the tools that I have been using is ksrc. It offers a CLI or an MCP stdio "server" that searches inside the source code of third-party Gradle dependencies, rather than the agent doing its own unZIPing of source JARs and the like. Or, as the documentation puts it:
With Gradle ecosystems, agents have to take a 15-step journey to download, locate, unpack and ripgrep source jars. ksrc turns 16k tokens wasted on that into 2 CLI commands.
(ripgrep, if you are unfamiliar with it, is a fast grep-style search tool that agents tend to reach for)
This speeds things up for frontier models and simplifies the work for local models.
So far, ksrc has been working well... when I can convince the agent to actually use it. I tried getting by with just the suggested AGENTS.md line, but I find that agents do not always wind up using ksrc. I will install the skill next and see if that improves matters. The documentation recommends the CLI over MCP, presumably for performance reasons, but it may be that MCP also boosts the likelihood that agents will remember that ksrc is available.
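For reference, the sort of AGENTS.md nudge I mean looks something like the following. This is my paraphrase of the idea, not the exact line that the ksrc documentation supplies:

```markdown
When you need to inspect the source of a third-party Gradle dependency,
do not download or unpack source JARs yourself. Use the `ksrc` CLI to
locate and search that source instead.
```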
Speaking of tools to help agents be reliable, my detektifier Gradle plugin has been working well for me. It resembles koverGate, but while koverGate helps agents consume Kover report output, detektifier helps agents interpret Detekt reports. Unlike koverGate, detektifier does not offer any sort of "gating" capability where a build succeeds or fails based on some partial result -- a successful build requires a clean Detekt report.
The combination of Kover+koverGate and Detekt+detektifier has helped boost the code quality that I get out of Claude, while also helping to keep the token costs down. And the fact that they are Gradle plugins means they are easy to add for an entire project -- the only per-machine configuration is teaching agent harnesses about the plugins, such as via the supplied skills.
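If you want to try that combination, the underlying report generators go into the Gradle build script as usual. The plugin IDs below are the real Kover and Detekt coordinates, but treat the versions as placeholders, and check each helper plugin's README for its own ID:

```kotlin
// build.gradle.kts (versions are illustrative; use current releases)
plugins {
    id("org.jetbrains.kotlinx.kover") version "0.9.1"
    id("io.gitlab.arturbosch.detekt") version "1.23.8"
    // koverGate and detektifier layer on top of these plugins' reports;
    // see their respective READMEs for the IDs to add here
}
```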
Also, following up from last issue, outlets like Ars Technica confirmed that GitHub is moving to a pure token-based approach for billing for Copilot.
What I found particularly interesting was Dare Obasanjo's note of how much GitHub was subsidizing the use of more-powerful models:
For instance, the price of Claude Opus 4.5 is going up 5x, Claude Opus 4.7 up 9x and GPT 5.4 up 6x.
These sorts of moves are why I expect that AI compute is going to get a lot more expensive and why I am continually trying to improve what I can do with local models.