This product has not been featured by Product Hunt yet. It will not be visible on their landing page and will not be ranked (it cannot win Product of the Day regardless of upvotes).

[Launch analytics dashboard: product upvotes, comments, and upvote speed vs the next 3 products launched the same day. Data not yet loaded.]

falsify

Pre-register your ML accuracy claims

A single-file Python CLI that hashes your ML accuracy claim with SHA-256 before the experiment runs. Edit the threshold afterwards → CI exits 3, the lie is mechanically blocked. MIT, on PyPI: pip install falsify.
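The lock-then-verify flow described above can be sketched in a few lines of stdlib Python. Function names here are hypothetical, and JSON stands in for falsify's actual sorted-key YAML canonicalization just to keep the sketch dependency-free; only the SHA-256 pre-registration idea and the exit-3-on-tamper behavior come from the description.

```python
import hashlib
import json

def canonicalize(claim: dict) -> bytes:
    # Sorted-key serialization so the same claim always hashes identically.
    # (falsify canonicalizes to sorted-key YAML; JSON keeps this sketch
    # stdlib-only.)
    return json.dumps(claim, sort_keys=True, separators=(",", ":")).encode()

def lock(claim: dict) -> str:
    """Pre-register: hash the claim BEFORE the experiment runs."""
    return hashlib.sha256(canonicalize(claim)).hexdigest()

def verify(claim: dict, locked_digest: str) -> int:
    """Return a CI exit code: 3 if the claim was edited after locking."""
    if hashlib.sha256(canonicalize(claim)).hexdigest() != locked_digest:
        return 3  # tampered: claim changed after pre-registration
    return 0

claim = {"metric": "accuracy", "threshold": 0.90, "dataset": "test-v1"}
digest = lock(claim)

claim["threshold"] = 0.85      # silently lower the bar afterwards
print(verify(claim, digest))   # → 3, CI refuses to produce a verdict
```

The point is that the hash is computed before any results exist, so lowering the threshold after the fact is mechanically detectable rather than a matter of trust.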

Top comment

hi PH 👋 shipped this for the @AnthropicAI hackathon — falsify, a tiny python CLI that does one thing: it pre-registers your ML accuracy claim with SHA-256 before the experiment runs. if anyone silently edits the threshold afterwards, the next run exits 3 and CI refuses to produce a verdict. the honest amendment path is `lock --force`, which writes a new audit entry. both paths are legible; only one is silent.

three days, single file, 3925 LOC, 518 tests, MIT. uses claude code skills + an MCP server + 2 forked-context subagents. honesty score 1.00 (yes, falsify uses itself in CI on every push).

design choices i'd love feedback on:
- canonical YAML for hashing (vs JSON-LD / RFC 8785 JCS) — landed on sorted-key YAML for human readability
- deterministic verdict, no LLM in the verdict step — if the model judges its own claim, integrity collapses
- exit-code contract: 0 PASS, 10 FAIL, 3 tampered, 11 guard violation — CI gates on these directly

why now: EU AI Act high-risk regime kicks in 2 august 2026. article 12 (logging) and article 18 (10-year tech-doc retention) require exactly this kind of cryptographically-anchored audit trail.

happy to dig into the YAML canonicalization, the threat model (docs/ADVERSARIAL.md lists 8 defended attacks + 6 explicitly out of scope), or why this composes with W&B/MLflow rather than replacing them. ask me anything.

pip install falsify · falsify.dev · 90s demo: https://youtu.be/vVZTNeak5PA
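As a rough illustration of the exit-code contract mentioned in the comment, a deterministic verdict step might look like the sketch below. The function and constant names are hypothetical; only the exit-code values (0 PASS, 10 FAIL, 3 tampered, 11 guard violation) come from the comment itself.

```python
# Exit-code contract the CI gates on directly (values from the maker's
# comment; names are illustrative, not falsify's actual API).
EXIT_PASS = 0       # claim met
EXIT_TAMPERED = 3   # claim file edited after locking
EXIT_FAIL = 10      # experiment ran honestly but missed the threshold
EXIT_GUARD = 11     # guard violation

def verdict(locked_digest: str, current_digest: str,
            measured: float, threshold: float) -> int:
    # Deterministic: a pure comparison, no LLM judging its own claim.
    if current_digest != locked_digest:
        return EXIT_TAMPERED
    return EXIT_PASS if measured >= threshold else EXIT_FAIL
```

Keeping the verdict a pure function of the locked hash and the measured number is what makes the CI gate trustworthy: the same inputs always produce the same exit code, so a pipeline can branch on it without interpretation.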

About falsify on Product Hunt

Pre-register your ML accuracy claims

falsify was submitted on Product Hunt and earned 3 upvotes and 1 comment, placing #151 on the daily leaderboard. A single-file Python CLI that hashes your ML accuracy claim with SHA-256 before the experiment runs. Edit the threshold afterwards → CI exits 3, the lie is mechanically blocked. MIT, on PyPI: pip install falsify.

On the analytics side, falsify competes within Developer Tools, Artificial Intelligence and GitHub — topics that collectively have 1M followers on Product Hunt. The dashboard above tracks how falsify performed against the three products that launched closest to it on the same day.

Who hunted falsify?

falsify was hunted by Cuneyt Ozturk. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community. See the full all-time top hunters leaderboard to discover who is shaping the Product Hunt ecosystem.

For a complete overview of falsify including community comment highlights and product details, visit the product overview.