How we started with Pythagora and how we failed with 4 different products before it
As a developer, I'm always on the lookout for new tools and technologies that can make my work easier, more efficient, and more enjoyable. So, after taking a 1-year break following my last company's acquisition, I decided to build my next product with my best friend.

So, over the last year, we created four different dev tools in search of the perfect fit. We tested all of them on the market, and while each tool had its strengths and weaknesses, none of them quite hit the mark until Pythagora. With over 400 stars on GitHub in just two weeks and 2,000 upvotes on Reddit, Pythagora proved to be the solution we had been searching for. Join me as I take you through our experience of building and testing these dev tools and why Pythagora stood out from the rest.
Iteration #1: Responsiveye

We started with a problem we knew all too well from our time leading the tech team behind FaZe Clan (fazeclan.com), a global gaming enterprise where we frequently released new pages. Despite having the designs, our team rarely had time, given the tight deadlines, to properly test for visual bugs. That's where the idea for responsiveye.com came in: a tool where you could upload a Figma design and a machine learning model would predict how the page should look at various resolutions, highlighting any visual bugs before new changes were deployed. First, we showed a demo video to about ten people, and it seemed like they felt the problem.
After coding for 3 months, we went back and talked to people again, but after conversations with around 100 people (devs and QAs) from dev shops and startups, the results weren't good. We discovered that most didn't see visual bugs as a significant enough problem to prioritize. While they were polite and interested, it wasn't enough to drive them to sign up on the website for updates about Responsiveye, which was a clear indication that the product didn't have product-market fit.
One piece of feedback was consistent across everyone we talked to: they simply don't have time to watch for visual bugs because they are always focused on a site's functionality.
Iteration #2: Responsiveye functional testing

After receiving feedback on the initial version of Responsiveye, we realized that functional bugs were a much more pressing concern for most users. As a result, we decided to focus on developing a new feature that could detect and highlight functional bugs on a web page. This feature, which we dubbed "Responsiveye functional testing," involved a prototype that autonomously clicked around a page and checked whether all of its buttons and forms worked correctly.
Although this approach was effective at identifying meaningful bugs (e.g., a broken form on a site), we encountered two major problems in the testing process. First, the tests were incredibly slow: testing a single page took around an hour. Second, because the algorithm simply tried random actions on the page, users couldn't tell which specific actions had been tested, making the testing process impossible to track. The second problem was a deal breaker for most users: both devs and QAs wanted to know exactly what the automated tests had covered, so it was clear we needed to significantly change our testing approach.
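To give a sense of what "random actions" meant in practice, here's a minimal sketch of that kind of crawler, assuming Puppeteer and a placeholder URL; it illustrates the approach, not our actual implementation:

```javascript
// random-crawl.js: a toy version of the "click around at random" prototype.
// Assumes Puppeteer; the URL and iteration count are placeholders.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Surface anything that looks like a functional bug.
  page.on('pageerror', (err) => console.error('JS error:', err.message));
  page.on('requestfailed', (req) =>
    console.error('Failed request:', req.url(), req.failure()?.errorText));

  await page.goto('https://example.com', { waitUntil: 'networkidle2' });

  for (let i = 0; i < 50; i++) {
    // Collect everything clickable currently on the page...
    const clickables = await page.$$('a, button, input[type=submit]');
    if (clickables.length === 0) break;

    // ...and click one at random. This is exactly why users couldn't
    // tell what had been tested: the action sequence was arbitrary.
    const target = clickables[Math.floor(Math.random() * clickables.length)];
    try {
      await target.click();
      await new Promise((r) => setTimeout(r, 500)); // let the page react
    } catch {
      // the element may have detached after a navigation; keep going
    }
  }

  await browser.close();
})();
```

Even this toy version shows both problems: every click waits on the page, so runs are slow, and no two runs exercise the same actions.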
We needed to make the testing fast, and our users needed to be able to see exactly what each test did.
Iteration #3: Responsiveye functional testing from user interactions

To solve the problems people raised in iteration #2, we sketched a tool that creates functional tests based on real user actions rather than clicking around the site at random. To achieve this, we envisioned a snippet embedded on a website, similar to a Google Analytics integration. This snippet would track user actions (like Hotjar or FullStory do) and give us the exact steps users take, from which we could create E2E tests.
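A stripped-down version of such a snippet might look like the sketch below. The selector logic and the /collect endpoint are hypothetical placeholders; production session recorders use far more robust strategies:

```javascript
// recorder.js: embedded on a page like a Google Analytics tag.
// Records user actions so E2E test steps can be reconstructed later.
(function () {
  const events = [];

  // Build a rough CSS selector for an element (a placeholder strategy;
  // real tools rely on much more stable heuristics, e.g. data attributes).
  function selectorFor(el) {
    if (el.id) return '#' + el.id;
    const path = [];
    while (el && el.nodeType === Node.ELEMENT_NODE && path.length < 5) {
      path.unshift(el.tagName.toLowerCase());
      el = el.parentElement;
    }
    return path.join(' > ');
  }

  document.addEventListener('click', (e) => {
    events.push({ type: 'click', selector: selectorFor(e.target), ts: Date.now() });
  }, true);

  document.addEventListener('change', (e) => {
    events.push({ type: 'input', selector: selectorFor(e.target), value: e.target.value, ts: Date.now() });
  }, true);

  // Flush the recorded steps when the user leaves the page.
  window.addEventListener('pagehide', () => {
    navigator.sendBeacon('/collect', JSON.stringify(events)); // hypothetical endpoint
  });
})();
```

Each recorded sequence maps naturally onto a list of E2E test steps. The hard part, as we were about to discover, was the state behind those steps.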
Our new approach was well received, and we were excited to move forward with the new strategy. However, we soon discovered a significant technical obstacle: many tests depended on the user's current state, which is stored in a database. For example, a user with a premium subscription sees different buttons on the site. Without control over the database state, it was difficult to replicate these conditions for testing purposes.
We explored different ways to replicate user states, such as using test accounts or creating custom scripts to simulate different scenarios. However, none of these approaches proved entirely effective or scalable.
To solve this problem, we needed to go to the backend and build a tool that had control over the entire backend system, including the database. However, after 9 months of failed iterations, we were sceptical that we could pull it off.
Iteration #4: SafetyTest

Discouraged by the time and money spent testing different products, we decided to create something simple and straightforward that could provide clear value to users. The result was safetytest.io, a Shopify app that automates testing of a store's core functions, such as adding items to the cart or completing the checkout process, without requiring people to write tests, create testing environments, and so on.
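Under the hood, a check like that boils down to a scripted storefront flow. Here's a minimal sketch, again assuming Puppeteer; the store URL and selectors are illustrative placeholders, not SafetyTest's actual code:

```javascript
// checkout-check.js: a toy version of an automated add-to-cart check.
// The store URL and selectors below are placeholders.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.goto('https://example-store.myshopify.com/products/sample-product');

  // Add the product to the cart, then open the cart page.
  await page.click('form[action="/cart/add"] button[type="submit"]');
  await page.goto('https://example-store.myshopify.com/cart');

  // The actual check: did an item end up in the cart?
  const itemCount = await page.$$eval('.cart-item', (items) => items.length);
  if (itemCount === 0) {
    throw new Error('Add-to-cart appears broken: cart is empty after adding a product');
  }

  console.log(`OK: ${itemCount} item(s) in cart`);
  await browser.close();
})();
```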
While SafetyTest didn't present any significant issues or technical challenges, we couldn't see it becoming a huge success one day. We still wanted to continue on the path we had started and pursue the bigger vision of fully automating testing by building a backend testing tool. So, we developed SafetyTest in parallel with the backend testing tool (which eventually turned into Pythagora), which we wanted to validate for feasibility.
SafetyTest started to get traction, but we also discovered that the backend testing tool was feasible to build. So, after working on both projects in parallel (marketing SafetyTest and building Pythagora) for a while, we decided to launch Pythagora.
When we launched it on Reddit, we were delighted to receive overwhelmingly positive feedback from the community: over 2,000 upvotes on Reddit and over 400 stars on GitHub within 2 weeks.
Iteration #5: Pythagora

After extensive research, experimentation, and iteration, we finally created Pythagora, a tool designed for backend developers and QAs that generates automated integration tests in seconds by recording server activity, without the need to write any code. With Pythagora, users can achieve up to 80% code coverage within just 30 minutes of using the tool, and 90% within an hour.
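To illustrate the core record-and-replay idea, here is a conceptual sketch of the "record" half for an Express app. This is not Pythagora's actual implementation; a real tool also has to capture database state and other non-deterministic inputs so that replays are stable:

```javascript
// capture.js: a conceptual sketch of recording server activity
// in an Express app. Not Pythagora's actual code.
const express = require('express');
const fs = require('fs');

const app = express();
app.use(express.json());

// Record every request/response pair that passes through the server.
app.use((req, res, next) => {
  const originalSend = res.send.bind(res);
  res.send = (body) => {
    fs.appendFileSync('captures.jsonl', JSON.stringify({
      method: req.method,
      path: req.originalUrl,
      requestBody: req.body,
      statusCode: res.statusCode,
      responseBody: body,
    }) + '\n');
    return originalSend(body);
  };
  next();
});

app.get('/hello', (req, res) => res.send({ message: 'world' }));

app.listen(3000);
// Each line of captures.jsonl is a ready-made integration test:
// replay the recorded request against the server and assert that
// the response matches the stored one.
```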
As a result of the Pythagora launch, we decided to stop further development on SafetyTest and focus entirely on the continued growth and improvement of Pythagora. We continue to invest heavily in research and development, working tirelessly to improve the tool's capabilities and performance.
Currently, Pythagora is quite limited: it supports only Node.js apps using Express and a MongoDB database. We're working on supporting more databases (PostgreSQL will likely come out soon) and frameworks.
How Pythagora works (3 min)
Conclusion
If you've really come this far, thank you so much for taking the time to read about our journey in creating Pythagora. We hope that our story has been informative and inspiring and that it has shed some light on the challenges and opportunities involved in developing new dev tools.
We believe that Pythagora has the potential to transform the way developers and QAs approach testing, enabling them to achieve higher levels of efficiency and accuracy than ever before. If you share our vision and are excited about what Pythagora can do, we invite you to show your support by giving us a star on GitHub and/or staying updated on the latest developments by entering your email address here.
Thanks again for your support, and we look forward to continuing to push the boundaries of what's possible in the world of automated testing and development.