l1m is the easiest way to get structured data from unstructured text or images using LLMs. No prompt engineering, no chat history — just a simple API that extracts structured JSON from text or images.
Hello hunters!
After struggling with complex prompt engineering and unreliable parsing, we built l1m, a simple API that lets you extract structured data from unstructured text and images.
This is actually a component we unbundled from our larger product because it was so useful on its own.
It's fully open source (MIT license) and you can:
- Use it with text or images (extract menu items, receipt fields, etc.)
- Bring your own model (OpenAI, Anthropic, or any compatible API)
- Run locally with Ollama for privacy
- Cache responses with customizable TTL
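To make the "bring your own model" idea concrete, here's a minimal sketch of what a request to a structured-extraction endpoint like this could look like. The field names (`input`, `schema`) and the `x-provider-*` headers are illustrative assumptions, not the documented contract — check the repo's README and SDKs for the real API.

```python
import json

# A JSON Schema describing the structure you want back.
schema = {
    "type": "object",
    "properties": {
        "merchant": {"type": "string"},
        "total": {"type": "number"},
    },
}

# Hypothetical request body: the raw text (or a base64 image)
# plus the schema the model's output must conform to.
payload = {
    "input": "RECEIPT\nAcme Coffee\nTotal: $7.50",
    "schema": schema,
}

# Bring-your-own-model: point the request at any compatible
# provider, e.g. a local Ollama server for privacy.
headers = {
    "x-provider-url": "http://localhost:11434/v1",   # assumed header name
    "x-provider-model": "llama3.1",                  # assumed header name
    "x-provider-key": "not-needed-for-local",        # assumed header name
}

# The actual HTTP POST is omitted here; this just shows the shape
# of what you'd send, e.g. requests.post(url, data=body, headers=headers).
body = json.dumps(payload)
print(sorted(payload.keys()))
```

The appeal of this shape is that the schema doubles as the prompt: you describe the output you want once, and the service handles coaxing the model into valid JSON.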
The code is at https://github.com/inferablehq/l1m with SDKs for Node.js, Python, and Go.
Would love to hear if this solves a pain point for you or how you might use it!