CRML

CRML is a declarative language for writing cyber risk as code.


We have infrastructure as code and network as code, but nothing like risk as code. CRML is an open, declarative, engine-agnostic, and control/attack-framework-agnostic Cyber Risk Modeling Language. It provides a YAML/JSON format for describing cyber risk models, telemetry mappings, simulation pipelines, dependencies, and output requirements — without forcing you into a specific quantification method, simulation engine, or security-control / threat catalog.
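The official CRML schema isn't shown here, but to make "a YAML format for describing risk models" concrete, a model file might look roughly like this. Every key below is our own illustrative guess, not the actual spec:

```yaml
# Hypothetical sketch only; keys are illustrative, not the official CRML schema
model:
  name: ransomware-billing-db
  method: fair                    # quantification method is pluggable
scenario:
  threat: ransomware
  asset: billing-db
assumptions:
  event_frequency:                # events per year
    distribution: lognormal
    params: {mu: -0.4, sigma: 0.5}
  loss_magnitude:                 # USD per event
    distribution: pert
    params: {min: 50000, mode: 250000, max: 2000000}
outputs:
  - annual_loss_expectancy
  - p95_annual_loss
```

Because the file is plain YAML, every assumption (frequency, severity, outputs) lives in version control and can be diffed and reviewed like any other code.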

Top comment

I was looking for a cyber risk engine to incorporate into our platform, and was surprised to find that none exists anywhere on the internet. I dug in to understand why. The reason is that there is no way to write cyber risks in a machine-readable format: no declarative language exists for this. That's when I thought of creating one.

CRML started from dozens of messy, real conversations with security leaders, risk teams, and CISOs who kept telling us the same thing:

“We have frameworks… but when the board asks a decision question, we still scramble.”
CRML is our attempt to change that.

It turns scattered assumptions, spreadsheets, and narratives into structured, executable cyber-risk models — so teams can reason about scenarios, trade-offs, and investments with actual clarity instead of gut feel.

We’re launching CRML first because modeling is the foundation. Before dashboards or automation, organizations need a clean way to think about risk.

We’d genuinely love your feedback:
• What’s broken today in cyber risk analysis?
• Where do models fall apart in practice?
• What would make this actually useful in your day-to-day work?

We’re here in the comments all day — fire away.

Comment highlights

The "Risk as Code" approach is brilliant - moving from spreadsheets to Git-versioned YAML/JSON solves so many problems with audit trails and collaboration. CISOs struggle to give boards concrete answers because risk models are scattered across different tools and people's heads. Making it declarative and engine-agnostic means you're not locked into one framework. How does CRML handle sensitivity analysis? When boards ask "what if X happens", can you fork the model and compare scenarios side by side?

Risk models living in spreadsheets means every assumption is implicit and nobody can diff them. CRML putting FAIR Monte Carlo and Bayesian modeling into the same YAML spec makes those assumptions versioned and reviewable, which is what most GRC tools still don't do. The real test is whether security teams adopt the spec or keep building bespoke models in Python notebooks.
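To make the "FAIR Monte Carlo" workflow the comment mentions concrete, here is a minimal sketch of what an engine consuming such a spec might do: draw an annual event count from a Poisson distribution and a lognormal loss per event, then report mean and tail losses. The distribution choices and parameters are illustrative assumptions, not part of CRML:

```python
import math
import random
import statistics

def simulate_annual_loss(freq=0.7, loss_mu=11.0, loss_sigma=1.2,
                         trials=50_000, seed=42):
    """Toy FAIR-style Monte Carlo: Poisson event count x lognormal severity.

    freq        -- expected loss events per year (illustrative assumption)
    loss_mu/.._sigma -- lognormal parameters for per-event loss in USD
    Returns (mean annual loss, 95th-percentile annual loss).
    """
    rng = random.Random(seed)
    threshold = math.exp(-freq)
    annual = []
    for _ in range(trials):
        # Draw the year's event count from Poisson(freq) via Knuth's method
        events, p = 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                break
            events += 1
        # Sum an independent lognormal loss for each event in the year
        annual.append(sum(rng.lognormvariate(loss_mu, loss_sigma)
                          for _ in range(events)))
    annual.sort()
    ale = statistics.fmean(annual)        # annualized loss expectancy
    p95 = annual[int(0.95 * trials)]      # 95th-percentile annual loss
    return ale, p95

ale, p95 = simulate_annual_loss()
```

The point of a declarative spec is that this simulation logic lives in the engine, while everything above the function call (frequencies, severities, outputs) lives in a reviewable YAML file.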