Octrafic is an open-source CLI for API testing. Point it at any OpenAPI spec or live endpoint, describe what you want to test in plain English, and let it handle the rest - from generating requests to validating responses and exporting a PDF report. No test scripts, no GUI, no mocks. Just a single binary. Works with OpenAI, Claude, Ollama, and any OpenAI-compatible provider.
I built Octrafic to make API testing simpler - no test scripts, no GUI, no mocks.
Point it at any API, describe what you want to test in plain English, and the AI agent handles the rest - planning scenarios, running real requests, validating responses, and exporting results.
What it can do:
- Describe tests in plain English - no boilerplate, no config files
- Generate an OpenAPI spec from your source code
- Run in CI/CD pipelines non-interactively with a single command
- Export tests to Postman, curl, or pytest to use in your existing toolchain
- Export PDF reports
- Works with any LLM - OpenAI, Claude, Ollama, llama.cpp. You bring your own key, nothing goes through my servers.
Single binary, no runtime dependencies, fully open-source under MIT.
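For the CI/CD bullet, here is a sketch of how a single-binary tool like this might slot into a GitHub Actions job. The flag names and the prompt-as-argument form below are illustrative assumptions, not Octrafic's documented interface - check the project README for the real invocation:

```yaml
# Hypothetical CI job; `--spec` and the inline prompt are assumptions
# for illustration, not documented octrafic flags.
api-tests:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Run API tests non-interactively
      env:
        OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
      run: |
        ./octrafic --spec openapi.yaml \
          "every /users endpoint should reject unauthenticated requests"
```

Because it is a single binary, there is nothing to `pip install` or `npm install` in the job - you download the release artifact and run it.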
Really cool approach to API testing. The "describe what you want to test in plain English" workflow is such a natural fit — writing and maintaining test scripts for every endpoint is one of those tasks that everyone knows is important but nobody actually enjoys doing. The fact that it generates a PDF report at the end is a nice touch too, super useful for sharing results with non-technical stakeholders. How does it handle authentication flows — like chained requests where you need a token from one endpoint to test another?
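On the chained-auth question: the underlying pattern - pull a token out of one response and replay it on the next request - is simple to express, though whether Octrafic's agent does this automatically is for the author to confirm. A minimal plain-Python sketch against a throwaway stub server (the endpoint names and token value here are made up for illustration):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Throwaway stub API: POST /login issues a token, GET /users requires it.
class StubAPI(BaseHTTPRequestHandler):
    def do_POST(self):
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self.send_response(200 if self.path == "/login" else 404)
        self.end_headers()
        if self.path == "/login":
            self.wfile.write(json.dumps({"token": "secret-token"}).encode())

    def do_GET(self):
        authed = self.headers.get("Authorization") == "Bearer secret-token"
        self.send_response(200 if authed else 401)
        self.end_headers()

    def log_message(self, *args):  # keep request logging quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# Step 1: authenticate and pull the token out of the first response.
login = urlopen(Request(f"{base}/login", data=b"{}", method="POST"))
token = json.loads(login.read())["token"]

# Step 2: replay the token as a Bearer header on the dependent request.
users = urlopen(Request(f"{base}/users",
                        headers={"Authorization": f"Bearer {token}"}))
print(users.status)  # 200

server.shutdown()
```

The hard part for any tool is not the mechanics but knowing *which* field of the first response feeds *which* header of the second - which is exactly where a spec-aware agent could plausibly help.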
All these tools are the same. You write that yours is different, but it's one and the same.
Interesting approach using LLMs to generate + validate test flows directly from OpenAPI specs. How are you thinking about reproducibility across runs?
CLI tools for API testing have always felt like they need a PhD to configure properly. A plain English interface is the right call.
The OpenAPI spec support is what caught my eye - I've been dealing with API validation while building an automation tool, and writing test scripts manually is genuinely painful.
Does Octrafic handle auth flows well? Things like OAuth tokens or API key rotation mid-session - that's usually where CLI testing tools fall apart in my experience.
Also, open-source is a big green flag. Will definitely be exploring this!
Hey Mikołaj, that line about "no test scripts, no GUI, no mocks" says a lot about what was frustrating you. Was there a specific project where you spent more time setting up the test infrastructure than actually testing the API?