Monitor is a comprehensive set of evaluations for video models, aimed at creative teams who care about qualitative measures: things like visual fidelity and cinematography, as well as industry-specific metrics like copyright compliance and diversity. We aim to be the Bloomberg of video AI!
I've been running internal evaluations at my studio to improve the video workflows I deploy for myself and my clients, and I'm publishing them to help video creators and their teams make better tool and model choices in the video AI era!
To add to the point about the ecosystem moving fast — the qualitative measures angle is what makes this stand out to me. Most benchmarks I've seen lean heavily on technical metrics, but "does this actually look cinematic" is often the real question creative teams are asking. Curious how you're handling the subjectivity there — is it human eval panels, or are you using AI judges for things like cinematography scores?
@sherif_higazy1 The video AI ecosystem is moving so fast that it's difficult to know which models actually perform better. Having benchmarks and leaderboards in one place makes a lot of sense. Congrats on launching!