This product has not been featured by Product Hunt yet. It is not visible on their landing page and will not be ranked (it cannot win Product of the Day regardless of upvotes).
opencode-skill-creator
Create and optimize agent skills with evals and benchmarks
opencode-skill-creator is a free, open-source plugin + skill for OpenCode that guides you through the full skill development lifecycle: drafting, evaluating, optimizing, benchmarking, and installing. It's a faithful TypeScript port of Anthropic's official skill-creator for Claude Code, fully rewritten to work with OpenCode's extensibility mechanisms.
Hey everyone! 👋 I'm Anton, the creator of opencode-skill-creator.
I built this because I was frustrated with how hit-or-miss AI agent skills are. You write a skill, test it manually, maybe tweak the description a few times, and hope it works. There was no systematic way to evaluate whether a skill triggers correctly or to measure improvements across iterations.
When I saw Anthropic's skill-creator for Claude Code, I loved the methodology — eval-driven development for AI skills — but it only worked for Claude Code and required Python. So I ported it to TypeScript and packaged it as an OpenCode plugin that anyone can install with one command.
The key insight: skills are software, and software should be tested. The description optimization loop alone was a game-changer for my own skills — it takes a skill from "maybe it'll trigger" to "quantitatively proven to trigger on the right prompts."
How it works:
You describe what skill you want (or use an existing one)
The tool generates test cases automatically
It runs evals — with and without the skill — to measure triggering accuracy
An LLM-powered optimization loop iteratively improves the skill's description
A visual review viewer lets you evaluate quality as a human
You benchmark results with variance analysis across iterations
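The eval step in the list above can be sketched as a small scoring function. This is an illustration only, not the plugin's actual code: `EvalCase`, `evaluateTriggering`, and the keyword-matching stand-in for the agent are all hypothetical names.

```typescript
// Hypothetical sketch of the eval step: measure how often a skill
// triggers on prompts that should (and should not) activate it.
// `runAgent` stands in for a real agent invocation.

interface EvalCase {
  prompt: string;
  shouldTrigger: boolean; // expected: should this prompt activate the skill?
}

interface EvalResult {
  accuracy: number;       // fraction of cases where triggering matched expectation
  falsePositives: number; // triggered when it shouldn't have
  falseNegatives: number; // failed to trigger when it should have
}

function evaluateTriggering(
  cases: EvalCase[],
  runAgent: (prompt: string) => boolean, // true if the skill triggered
): EvalResult {
  let correct = 0, falsePositives = 0, falseNegatives = 0;
  for (const c of cases) {
    const triggered = runAgent(c.prompt);
    if (triggered === c.shouldTrigger) correct++;
    else if (triggered) falsePositives++;
    else falseNegatives++;
  }
  return {
    accuracy: cases.length ? correct / cases.length : 0,
    falsePositives,
    falseNegatives,
  };
}

// Toy run with a keyword matcher standing in for the agent:
const cases: EvalCase[] = [
  { prompt: "convert this PDF to markdown", shouldTrigger: true },
  { prompt: "extract tables from report.pdf", shouldTrigger: true },
  { prompt: "what's the weather today?", shouldTrigger: false },
  { prompt: "summarize this PDF", shouldTrigger: true },
];
const result = evaluateTriggering(cases, (p) => p.toLowerCase().includes("pdf"));
console.log(result.accuracy);
```

Running the same scoring with and without the skill installed gives the with/without comparison, and feeding the accuracy back into an LLM rewrite of the description is what the optimization loop iterates on.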
Technical details:
TypeScript plugin with zero Python dependencies
Registers custom tools in OpenCode (skill_validate, skill_eval, skill_optimize_loop, etc.)
Based on Anthropic's official skill-creator architecture
Apache 2.0 license
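The tool registration mentioned above might look roughly like the following. The interfaces here are assumptions for illustration, not OpenCode's actual plugin API; only the tool names (`skill_validate`, `skill_eval`) come from the source.

```typescript
// Illustrative sketch only: ToolDef and ToolRegistry are invented shapes,
// not OpenCode's real plugin API. It shows the general pattern of a plugin
// registering named tools that the agent can call.

interface ToolDef {
  description: string;
  run: (args: Record<string, string>) => Promise<string>;
}

type ToolRegistry = Map<string, ToolDef>;

function registerSkillTools(tools: ToolRegistry): void {
  tools.set("skill_validate", {
    description: "Validate a skill's structure and frontmatter",
    run: async ({ path }) => `validated ${path}`, // placeholder body
  });
  tools.set("skill_eval", {
    description: "Run triggering evals for a skill",
    run: async ({ path }) => `eval results for ${path}`, // placeholder body
  });
}

const registry: ToolRegistry = new Map();
registerSkillTools(registry);
console.log([...registry.keys()]);
```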
Happy to answer any questions! Also looking for contributors and feedback from OpenCode users.
opencode-skill-creator was submitted on Product Hunt and earned 0 upvotes and 1 comment, placing #317 on the daily leaderboard.
On the analytics side, opencode-skill-creator competes within Open Source, Artificial Intelligence and GitHub — topics that collectively have 575.9k followers on Product Hunt. The dashboard above tracks how opencode-skill-creator performed against the three products that launched closest to it on the same day.
Who hunted opencode-skill-creator?
opencode-skill-creator was hunted by Anton Gulin. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community. See the full all-time top hunters leaderboard to discover who is shaping the Product Hunt ecosystem.
For a complete overview of opencode-skill-creator including community comment highlights and product details, visit the product overview.
GitHub: https://github.com/antongulin/opencode-skill-creator
npm: https://www.npmjs.com/package/opencode-skill-creator