Prompt Joy is an open source tool that lets you do two main things:
- Log: log your LLM requests so you can inspect the outputs.
- Split Test: A/B test your prompts with ease to find out which prompts work best.
Hi guys, stoked to present an open source tool to help you debug your LLM prompts in production.
Problem: you create a prompt, but how the heck is it performing? Are some users getting really weird results? We couldn't answer that either!
Solution: a simple prompt logger. Nothing much to it. It's an API you can simply send the input and output to. No need to integrate with anything, nothing to install. Just a simple API logger, really.
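To make the "just send the input and output to an API" idea concrete, here's a minimal sketch. The endpoint URL and the payload field names (`input`, `output`) are assumptions for illustration, not Prompt Joy's actual schema; check the project's docs for the real ones.

```python
import json
import urllib.request

# Hypothetical endpoint for a self-hosted instance; the real path and
# port depend on your deployment and on Prompt Joy's actual API.
PROMPT_JOY_URL = "http://localhost:8000/api/log"


def build_log_payload(prompt: str, completion: str) -> bytes:
    """Serialize one LLM request/response pair as JSON.

    Field names here are assumed for the sketch.
    """
    return json.dumps({"input": prompt, "output": completion}).encode("utf-8")


def send_log(prompt: str, completion: str) -> None:
    """POST the pair to the logger. No SDK, no integration: the logger
    never calls the LLM itself, so this works with any provider."""
    req = urllib.request.Request(
        PROMPT_JOY_URL,
        data=build_log_payload(prompt, completion),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)
```

The point of the design is visible in the code: your app calls the LLM however it likes, then fires off a plain HTTP POST with whatever it sent and received, so no prompt logic lives inside the logging tool.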
I've seen a number of solutions that want you to put all your prompt logic in their tool. Not only do I not want to do that, I don't necessarily even want OpenAI to see all my data.
So this is self-hostable and works with any LLM, not just OpenAI. Stand it up in seconds (with docker-compose) or sign up online for a free preview!
Enjoy! 🤗
Hello @andrewpierno1, congrats on launching such a cool product. I hope you have a successful launch.
Do you want to host a live demo for your users today? We are launching our live-streaming SDK product on the 18th of July and giving it to fellow makers for free before the launch.
Let me know if you want to try it out. Happy to set it up. Cheers!