opencode-skill-creator
Create and optimize agent skills with evals and benchmarks
opencode-skill-creator is a free, open-source plugin + skill for OpenCode that guides you through the full skill development lifecycle: drafting, evaluating, optimizing, benchmarking, and installing. It's a faithful TypeScript port of Anthropic's official skill-creator for Claude Code, fully rewritten to work with OpenCode's extensibility mechanisms.
Hey everyone! 👋 I'm Anton, the creator of opencode-skill-creator.
I built this because I was frustrated with how hit-or-miss AI agent skills are. You write a skill, test it manually, maybe tweak the description a few times, and hope it works. There was no systematic way to evaluate whether a skill triggers correctly or to measure improvements across iterations.
When I saw Anthropic's skill-creator for Claude Code, I loved the methodology — eval-driven development for AI skills — but it only worked for Claude Code and required Python. So I ported it to TypeScript and packaged it as an OpenCode plugin that anyone can install with one command.
The key insight: skills are software, and software should be tested. The description optimization loop alone was a game-changer for my own skills — it takes a skill from "maybe it'll trigger" to "quantitatively proven to trigger on the right prompts."
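The optimization loop described above can be sketched as a simple hill-climb: propose candidate descriptions, score each by triggering accuracy, and keep the best. This is a hypothetical illustration, not the plugin's actual code; `optimizeDescription`, the proposer, and the scorer are stand-ins for what would be LLM calls in practice.

```typescript
// Hypothetical sketch of a description-optimization loop. In the real tool,
// `propose` and `score` would call an LLM and run evals; here they are toy
// stand-ins so the loop itself is runnable.

type Scorer = (description: string) => number;

function optimizeDescription(
  initial: string,
  propose: (current: string) => string[],
  score: Scorer,
  iterations = 3,
): { description: string; accuracy: number } {
  let best = { description: initial, accuracy: score(initial) };
  for (let i = 0; i < iterations; i++) {
    // Propose variants of the current best and keep any that score higher.
    for (const candidate of propose(best.description)) {
      const accuracy = score(candidate);
      if (accuracy > best.accuracy) best = { description: candidate, accuracy };
    }
  }
  return best;
}

// Toy demo: the scorer rewards descriptions that mention "PDF".
const score: Scorer = (d) => (d.includes("PDF") ? 1 : 0.4);
const result = optimizeDescription(
  "Extracts text from documents",
  (cur) => [cur + " and PDF files", cur + " quickly"],
  score,
);
console.log(result.accuracy); // 1
```

The point of the loop is exactly the "quantitatively proven" claim: each iteration is accepted only if measured triggering accuracy improves.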
How it works:
You describe what skill you want (or use an existing one)
The tool generates test cases automatically
It runs evals — with and without the skill — to measure triggering accuracy
An LLM-powered optimization loop iteratively improves the skill's description
A visual review viewer lets you assess output quality yourself
You benchmark results with variance analysis across iterations
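The with/without-skill eval in the steps above boils down to measuring triggering accuracy over a set of labeled prompts. Here is a minimal, hypothetical sketch; `runAgent` stands in for an actual model invocation, and the names are illustrative, not the plugin's API.

```typescript
// Hypothetical sketch of the eval step: run each test prompt, record whether
// the skill triggered, and compare against the expected behavior.

interface TestCase {
  prompt: string;
  shouldTrigger: boolean; // expected triggering behavior for this prompt
}

function triggerAccuracy(
  cases: TestCase[],
  runAgent: (prompt: string) => boolean, // true if the skill fired
): number {
  const correct = cases.filter((c) => runAgent(c.prompt) === c.shouldTrigger);
  return correct.length / cases.length;
}

// Toy demo: a fake agent that triggers only on prompts mentioning "invoice".
const cases: TestCase[] = [
  { prompt: "summarize this invoice", shouldTrigger: true },
  { prompt: "parse the invoice total", shouldTrigger: true },
  { prompt: "write a haiku", shouldTrigger: false },
  { prompt: "refactor this function", shouldTrigger: false },
];
const withSkill = triggerAccuracy(cases, (p) => p.includes("invoice"));
const withoutSkill = triggerAccuracy(cases, () => false);
console.log(withSkill, withoutSkill); // 1 0.5
```

Running the same harness without the skill gives the baseline, so the delta between the two numbers is the skill's measurable contribution.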
Technical details:
TypeScript plugin with zero Python dependencies
Registers custom tools in OpenCode (skill_validate, skill_eval, skill_optimize_loop, etc.)
Based on Anthropic's official skill-creator architecture
Apache 2.0 license
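The variance analysis mentioned in the workflow can be sketched as repeating the eval several times and reporting mean and standard deviation, so an "improvement" can be judged against run-to-run noise. This is an illustrative sketch under assumed names, not the plugin's actual benchmarking code.

```typescript
// Hypothetical sketch of the benchmark step: repeat an eval and summarize
// accuracy as mean +/- standard deviation across runs.

function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function stddev(xs: number[]): number {
  const m = mean(xs);
  return Math.sqrt(mean(xs.map((x) => (x - m) ** 2)));
}

function benchmark(runEval: () => number, runs = 5) {
  const accuracies = Array.from({ length: runs }, runEval);
  return { mean: mean(accuracies), stddev: stddev(accuracies) };
}

// Toy demo with a deterministic stand-in for a nondeterministic eval.
const samples = [0.8, 0.9, 0.85, 0.9, 0.8];
let i = 0;
const stats = benchmark(() => samples[i++ % samples.length]);
console.log(stats.mean.toFixed(2)); // "0.85"
```

If the mean gain between two skill versions is smaller than the standard deviation across runs, the "improvement" is likely noise rather than a real change in triggering behavior.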
Happy to answer any questions! Also looking for contributors and feedback from OpenCode users.
About opencode-skill-creator on Product Hunt
opencode-skill-creator was submitted on Product Hunt, earning 0 upvotes and 1 comment and placing #317 on the daily leaderboard.
opencode-skill-creator was featured in Open Source (68.3k followers), Artificial Intelligence (466.4k followers) and GitHub (41.2k followers) on Product Hunt. Together, these topics include over 117.8k products, making this a competitive space to launch in.
Who hunted opencode-skill-creator?
opencode-skill-creator was hunted by Anton Gulin. On Product Hunt, a “hunter” is the community member who submits a product to the platform, uploading the images and the link and tagging the makers behind it.
GitHub: https://github.com/antongulin/opencode-skill-creator
npm: https://www.npmjs.com/package/opencode-skill-creator