Pythagora is Now Agentic

Pythagora went agentic last week. This is the biggest leap we've made since launching v1.

Our first agent is built on Claude Code (Sonnet) and solves one of the biggest frustrations with LLM development tools: getting stuck in endless error loops and repeated responses.

We've been using the agentic model internally and wanted to share our initial impressions from six engineers on the team.

Initial impressions from the team

Here are some observations from our engineers:

Leon

The AI has much better focus in the codebase and has fewer hallucinations. As a user, I see less regression; losing old features was a big problem before. As a developer, I love how it simplifies the codebase and makes future improvements very easy to add. The reduced regression has been the most noticeable improvement. You are not aware how much the world is changing.

Zvonimir

The results are just much better, both for first-try implementation of new features and for debugging. Most notably, the integration of third-party services is working much better now, and the debugging loops may be completely fixed: the agent iterates until the bug is resolved. I like that it creates tests to see if it's working, which makes me more confident in the implemented solution. One surprising moment was when it stumbled upon a problem in my local environment unrelated to Pythagora, fixed my environment, and continued working on the app. Just try it with something you weren't able to implement before.

Sven

For me, it's all about correctness. For major requests like changing the theme of the app, we actually get a whole new implementation of the theme. My favorite features are the ability to chat with the agent and the way it implements the whole task correctly on the first try. What surprised me most is that it works as well as Claude Code, or even better in some situations. It's like Claude Code, but better, and with a better UI.

Marko

Claude Code finds context much better by reading files and executing commands, allowing even the same LLM to perform much better, especially as projects scale after the first iterations. The agent has environment access, so a wide variety of tools can be integrated and tested out of the box. Debugging performance is incredible: the first iteration almost always results in a fix, or at least useful logging. And you never get stuck again. Before, with plain LLM prompts, you could get stuck with repeated responses, but the agent is much better in this regard.

Cody

It relies on tools well and feels more precise. It's definitely capable of understanding and acting on a wider variety of requests, and it has offered precise, quick solutions I hadn't thought of yet. Imagine the LLM had worked in the field for a while and came back more experienced, knowing how to use an IDE.

Jeeshu

The flexibility of moving to any part of the app, with the agent understanding the current state and working seamlessly, is incredible. It feels like a one-man army, handling development, testing, and chatting to understand context all at once. I tried asking it for testing instructions and flows, and it had brilliant answers. The way it handles testing and catches regressions before they reach users has been the biggest surprise.

Looking ahead

We've found the agentic model delivers first-try implementations and debugging workflows that exceed our expectations. As one engineer put it: "You are not aware how much the world is changing."

This foundation opens the door to multiple specialized agents ahead.

Try Pythagora's agentic model.

-The Pythagora Team