This product has not been featured by Product Hunt yet. It will not be visible on their landing page and will not be ranked (it cannot win Product of the Day regardless of upvotes).
[Dashboard: product upvotes, comments, and upvote speed vs the next 3 same-day launches (data loading)]
Peer
Peer answers health questions using real clinical evidence.
Peer answers health questions using real clinical evidence. It searches 9 medical databases, cites every claim to a specific study, and verifies each citation against its source. Evidence is graded by study design using fixed rules. Every response is independently scored for accuracy across six dimensions, and we test the same questions against leading AI tools to measure where we stand.
Peer was submitted on Product Hunt and earned 4 upvotes and 1 comment, placing #40 on the daily leaderboard.
On the analytics side, Peer competes within Artificial Intelligence, Search and Medical — topics that collectively have 488.7k followers on Product Hunt. The dashboard above tracks how Peer performed against the three products that launched closest to it on the same day.
Who hunted Peer?
Peer was hunted by Umar ElBably. A “hunter” on Product Hunt is the community member who submits a product to the platform, uploading the images and the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community. See the full all-time top hunters leaderboard to discover who is shaping the Product Hunt ecosystem.
For a complete overview of Peer including community comment highlights and product details, visit the product overview.
Hey everyone, Umar here. I built Peer.
This started from a simple frustration. Every time I looked into a health discussion, the answers were mostly anecdotes: rarely actual studies, and almost never any context on how strong the evidence really was. So I built something I wanted to use myself.
Peer searches 9 medical databases as you ask the question. That includes PubMed, ClinicalTrials.gov with trial outcomes, openFDA, DailyMed, PubChem, FDA Orange Book, FDA UNII for substance identity, and major health organizations like WHO, CDC, NHS, and Mayo Clinic. It then writes a clear answer backed by actual studies.
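To make that concrete, here is a heavily simplified sketch of a single lookup against one of those sources, PubMed, via NCBI's public E-utilities API. The endpoints and parameters are NCBI's documented ones; everything else is for illustration only, not the production code:

    # Minimal PubMed lookup via NCBI E-utilities. Illustrative only;
    # the real pipeline covers all nine sources and does much more.
    import requests

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

    def search_pubmed(query: str, max_results: int = 5) -> list[str]:
        """Return PubMed IDs (PMIDs) matching a free-text query."""
        resp = requests.get(f"{EUTILS}/esearch.fcgi", params={
            "db": "pubmed", "term": query,
            "retmax": max_results, "retmode": "json",
        }, timeout=10)
        resp.raise_for_status()
        return resp.json()["esearchresult"]["idlist"]

    def fetch_summaries(pmids: list[str]) -> dict:
        """Fetch title and journal metadata for a batch of PMIDs."""
        resp = requests.get(f"{EUTILS}/esummary.fcgi", params={
            "db": "pubmed", "id": ",".join(pmids), "retmode": "json",
        }, timeout=10)
        resp.raise_for_status()
        return resp.json()["result"]

    pmids = search_pubmed("creatine supplementation cognition randomized controlled trial")
    summaries = fetch_summaries(pmids)
    for pmid in pmids:
        print(pmid, summaries[pmid]["title"])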
The important part is what happens after:
Every citation is checked against the original source
Evidence is graded by study design using fixed rules (sketched just after this list)
A separate system scores each response for accuracy
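"Fixed rules" means the grade is a deterministic lookup on the study design, not a model's judgment call. A simplified sketch using the standard evidence pyramid (the production rule table covers more designs and edge cases than this):

    # Evidence grading as a fixed lookup over the usual hierarchy of
    # study designs. Ranks follow the standard evidence pyramid; this
    # sketch shows the shape, not the actual rule table.
    EVIDENCE_RANK = {
        "meta-analysis": 1,                # strongest: pooled trial results
        "systematic review": 1,
        "randomized controlled trial": 2,
        "cohort study": 3,
        "case-control study": 4,
        "case report": 5,
        "expert opinion": 6,               # weakest: no primary data
    }

    def grade(study_design: str) -> int:
        """Lower rank = stronger evidence; unknown designs get the weakest grade."""
        return EVIDENCE_RANK.get(study_design.strip().lower(), 6)

    assert grade("Randomized Controlled Trial") < grade("case report")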
On evaluation, there is no standard benchmark for medical research retrieval, so we built our own using approaches adapted from existing healthcare AI evaluations. We use a set of 160 curated questions across 40+ categories, from supplements and drug safety to adversarial edge cases and prompt injection attempts.
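For a sense of shape, one benchmark item boils down to a small record like this (the fields and sample values are illustrative, not entries from the real question set):

    # Sketch of one benchmark item. The question, category, and PMID
    # below are made up; fields are a minimal guess at what the eval needs.
    from dataclasses import dataclass, field

    @dataclass
    class BenchmarkItem:
        question: str
        category: str                       # e.g. "supplements", "drug safety"
        adversarial: bool = False           # edge cases, prompt-injection attempts
        reference_sources: list[str] = field(default_factory=list)

    item = BenchmarkItem(
        question="Does melatonin interact with warfarin?",
        category="drug safety",
        reference_sources=["pubmed:12345678"],  # hypothetical PMID
    )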
Each answer is scored across six dimensions: factual accuracy, citation grounding, completeness, honest uncertainty, safety, and clarity. If an answer fails factual accuracy, it gets a zero, no matter how good the rest is. We also run a claim-level verification step: every answer is broken into individual claims, stripped of formatting, and each claim is independently checked against sources like PubMed.
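The gate is easy to state precisely. In simplified form (equal weighting and the 0.5 threshold here are placeholders, not the calibrated values):

    # The six dimensions named above, with factual accuracy as a hard gate.
    DIMENSIONS = ["factual_accuracy", "citation_grounding", "completeness",
                  "honest_uncertainty", "safety", "clarity"]

    def overall_score(scores: dict[str, float], threshold: float = 0.5) -> float:
        """Zero the whole answer if factual accuracy fails; otherwise average.
        No amount of clarity or completeness can rescue a wrong answer."""
        if scores["factual_accuracy"] < threshold:
            return 0.0
        return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)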
We run the same questions against ChatGPT and Claude with web search enabled. Peer consistently performs best on our benchmark. We are not claiming perfection. We are saying we measure rigorously, and the results give us confidence.
If you try it, I would love to know where it feels unclear, where you do not trust it, or where it breaks.
Check it out for free: frompeer.com