
Cyris

Turns every AI decision into audit-ready evidence

API
Artificial Intelligence
GitHub
Alpha

Your AI agents make thousands of decisions a day. Can you prove what they did? Cyris auto-instruments 12+ LLM providers (OpenAI, Anthropic, Bedrock, Gemini, Ollama, and more) with zero code changes. Every decision is logged to a tamper-proof, hash-chained audit trail. When a hospital sends a 200-question security questionnaire, Cyris auto-fills the answers from your real agent data in 90 seconds. Two lines of code. Agents discovered in 10 seconds. Your next compliance review writes itself.
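As a rough illustration of what a two-line, zero-config setup typically looks like for an auto-instrumenting SDK, here is a minimal sketch. The package and function names below (`cyris`, `cyris.init`) are assumptions for illustration, not Cyris's documented API.

```python
# Hypothetical sketch: the module name and init() signature are assumed,
# not taken from Cyris's actual documentation.
import cyris

cyris.init(api_key="YOUR_KEY")  # assumed entrypoint that patches supported LLM clients

# After this, existing calls such as openai.chat.completions.create(...) would be
# captured into the hash-chained audit trail with no further code changes.
```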

Top comment

Hey PH 👋 I'm Krish, 19, building Cyris with two cofounders from Stanford and Columbia.

I spent 13 months at a cancer research institute watching healthcare AI startups lose hospital deals — not because their agents didn't work, but because nobody could prove what those agents did. The compliance officer asks "show me what your AI decided for this patient" and the founder has nothing. We built Cyris to fix that. Two lines of code, zero config — every LLM call is auto-captured into a hash-chained audit trail.

What makes it different from observability tools like Helicone or LangSmith:

→ Tamper-proof hash chain on every entry — cryptographic proof, not just logs (see the sketch below)
→ Auto-fills security questionnaires from real operational data in 90 seconds
→ Shareable live trust center URL — hospitals check your compliance posture anytime, no login
→ Forensic trace — when something breaks, reconstruct the full decision chain across agents in seconds
→ Auto-instruments 12+ providers, including OpenAI, Anthropic, Bedrock, Gemini, and Ollama — zero code changes

We're healthcare-first, but the SDK works for any AI agent system. If your agents make decisions someone will eventually ask about, Cyris records the answer before the question arrives.

Would love honest feedback — what would make this more useful to you? Try it live: cyrisai.dev
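For anyone curious what "hash-chained" means concretely, here is a minimal, generic sketch of the technique (not Cyris's actual implementation; the entry fields are assumptions): each record's hash covers the previous record's hash, so editing or deleting any earlier entry invalidates every entry after it.

```python
import hashlib
import json
import time

def append_entry(chain: list[dict], decision: dict) -> dict:
    """Append a log entry whose hash covers the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64  # genesis sentinel
    body = {"ts": time.time(), "decision": decision, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    entry = {**body, "hash": digest}
    chain.append(entry)
    return entry

def verify(chain: list[dict]) -> bool:
    """Recompute every hash in order; any tampering breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {k: entry[k] for k in ("ts", "decision", "prev_hash")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or recomputed != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

# Usage: log two decisions, then show that editing the first breaks verification.
chain: list[dict] = []
append_entry(chain, {"agent": "triage-bot", "action": "flagged patient record"})
append_entry(chain, {"agent": "triage-bot", "action": "escalated to clinician"})
assert verify(chain)
chain[0]["decision"]["action"] = "did nothing"  # tamper with history
assert not verify(chain)
```

Verification needs only the log itself, so an auditor who trusts the latest hash can detect tampering anywhere earlier in the history.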

Comment highlights

We built TAB Platform for the other side of this problem: verifying what agents CAN do before deployment. You prove what they DID after. Both layers are needed. Congrats on the launch.

Hi all, I'm Mingchuan, a first-year CS+Math student at Stanford. I have a strong background in academic research: I spent 2.5 years at the Utah Neurorobotics Lab building better prosthesis control systems, and I'm currently doing RL research for advanced mathematical reasoning in the NLP group at the Stanford AI Lab.

Through my experiences, I've realized how difficult it is to build and manage agentic workflows. Researchers often have to repeat tedious data-collection processes that these workflows could automate, but building and maintaining them is complicated. Excited to change how teams orchestrate and trust autonomous workflows with Cyris.

Hello! I'm David, a current Columbia CS student. I'm super excited to be building Cyris with my co-founders!

My background is a bit different from Krish's. I've been embedded on the technical side of biotech startups and incubators, which means I've seen firsthand how AI agents get built fast and audited slowly. The compliance gap isn't hypothetical; it shows up the moment a hospital's IT team asks for a system trace and the engineering team has to piece something together from scattered logs.

That frustration is a big reason I'm building Cyris. The hash-chained audit trail is something I wish every team I worked with had from day one. Happy to go deep on the technical architecture if anyone's curious!