Cekura is an end-to-end QA platform for Voice & Chat AI agents. Cekura helps Conversational AI companies with pre-production testing and simulation, as well as monitoring of production calls, to ensure quality and reliability at every stage of development.
Cekura lets you simulate, evaluate, and monitor your Voice & Chat AI agents automatically.
Why did we build Cekura? 💡
Cekura was born out of our own frustration building voice agents for healthcare, where every change required hours of manual QA, yet critical failures still made it to production. We built the platform we wished existed: one that simulates conversations at scale, generates edge-case scenarios, and monitors real-world agent calls for failures.
Team Background 👥
Shashij has published a paper on AI systems testing from his research at ETH Zurich and Google. Tarush has developed simulations for ultra-low-latency trading, and I have led product and growth teams before, including at a conversational AI company. All three of us met at IIT Bombay and have been friends for the last 8 years.
Problem 🚨: Making Conversational AI agents reliable is hard. Manually calling or chatting with your agents, or listening through thousands of conversations, is slow, error-prone, and does not provide the required coverage.
Our Solution: At Cekura, we work closely with you at each step of the agent-building journey and help you improve and scale your agents 10 times faster.
Key Features:
Testing:
Scenario Generation: Create varied test cases from agent descriptions automatically for comprehensive coverage.
Evaluation Metrics: Track custom and AI-generated metrics. Check for instruction following, tool calls, and conversational metrics (interruptions, latency, etc.).
Prompt Recommendation: Get actionable insights to improve each of the metrics.
Custom Personas: Emulate diverse user types with varied accents, background noise, and conversational styles.
Production Call & Chat Simulation: Simulate production calls to ensure all the fixes have been incorporated.
Observability:
Instruction Following: Identify instances where agents fail to follow instructions.
Drop-off Tracking: Analyze when and why users abandon calls, highlighting areas for improvement.
Custom Metrics: Define unique metrics for personalized call analysis (see the sketch after this list).
Alerting: Proactively notify users of critical issues like latency spikes or missed function calls.
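To make these moving parts concrete, here is a minimal, purely illustrative Python sketch. Nothing below is Cekura's actual SDK or API: the Scenario and CustomMetric classes and every field and value are hypothetical, invented only to show how a generated scenario, its expected outcome, a persona, and a custom metric relate to one another.

```python
# Purely illustrative sketch -- NOT Cekura's actual SDK or API.
# Every class, field, and value below is hypothetical, invented only to
# show how the concepts in the feature list fit together.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    persona: str            # accent, background noise, conversational style
    instructions: str       # what the simulated user tries to accomplish
    expected_outcome: str   # generated alongside each scenario

@dataclass
class CustomMetric:
    name: str
    description: str        # evaluated against each simulated conversation

# A scenario a platform like Cekura might auto-generate from an agent description.
refund_scenario = Scenario(
    name="impatient-refund-request",
    persona="impatient caller, strong accent, noisy background",
    instructions="Demand a refund and interrupt the agent mid-sentence.",
    expected_outcome="Agent stays calm, verifies the order, and initiates the refund.",
)

# A user-defined metric tracked across simulated and production conversations.
empathy_metric = CustomMetric(
    name="empathy",
    description="Did the agent acknowledge the caller's frustration?",
)

print(refund_scenario.expected_outcome)
print(empathy_metric.description)
```

In practice all of this is configured through the Cekura platform; the sketch simply mirrors the concepts from the feature list above.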
Major Updates Since Last Product Hunt Launch:
Added Chat AI Testing and Observability
Automated expected outcomes generated along with each scenario
Simulation of production conversations
'Instruction Following' and 'Hallucination' metrics to automatically flag deviations from the agent description and the knowledge base, respectively
Who is this for?
Anyone building Conversational AI agents. If you want to make your voice & chat AI agents reliable, book a slot on my calendar here 🗓️ or reach out to [email protected] 📧.
If you'd like to engage in a fun role-play, you can talk with our agent here: you will act as a customer support representative, and our agent will call you about a refund, an order status, and a product recommendation. After the call, we will give you an evaluation.
Please note: in reality, we generate hundreds of simulations automatically and provide detailed analytics on your AI agent's performance, as demonstrated in the demo video.