Product Thumbnail

Arch

Build fast, hyper-personalized agents with intelligent infra

Open Source
Developer Tools
Artificial Intelligence

Arch is an intelligent infrastructure primitive that helps developers build fast, personalized agents in minutes. Arch is a gateway engineered with LLMs to seamlessly integrate prompts with APIs, and to transparently add safety and tracing features outside app logic.

Top comment

Hello PH! My name is Salman and I work on Arch - an open source infrastructure primitive to help developers build fast, personalized agents in minutes. Arch is an intelligent prompt gateway engineered with (fast) LLMs for the secure handling, robust observability, and seamless integration of prompts with your APIs - all outside business logic.

Arch is built on (and by the contributors of) Envoy with the belief that: prompts are nuanced and opaque user requests, which require the same capabilities as traditional HTTP requests, including secure handling, intelligent routing, robust observability, and integration with backend (API) systems for personalization - all outside business logic.

Arch handles the critical but undifferentiated tasks related to the handling and processing of prompts, including detecting and rejecting jailbreak attempts, intelligently calling "backend" APIs to fulfill the user's request represented in a prompt, routing to and offering disaster recovery between upstream LLMs, and managing the observability of prompts and LLM interactions in a centralized way.

⭐ Core Features:

🏗️ Built on Envoy: Arch runs alongside application servers, and builds on top of Envoy's proven HTTP management and scalability features to handle ingress and egress traffic related to prompts and LLMs.

🤖 Function Calling: For fast agentic and RAG apps. Engineered with SOTA LLMs to handle fast, cost-effective, and accurate prompt-based tasks like function calling and parameter extraction from prompts. Our models can run in under 200 ms!

🛡️ Prompt Guard: Arch centralizes prompt guards to prevent jailbreak attempts and ensure safe user interactions without writing a single line of code.

🚦 Traffic Management: Arch manages LLM calls, offering smart retries, automatic cutover, and resilient upstream connections for continuous availability - between LLMs, or across multiple versions of a single LLM provider.

👀 OpenTelemetry Tracing, Metrics and Logs: Arch uses the W3C Trace Context standard to enable complete request tracing across applications, ensuring compatibility with existing observability tools, and provides metrics to monitor latency, token usage, and error rates, helping optimize AI application performance.

- Visit our GitHub page to get started (and ⭐️ the project 🙏): https://github.com/katanemo/arch
- To learn more about Arch, see our docs: https://docs.archgw.com/

A big thanks 🙏 to my incredibly talented team who helped us reach our first milestone as we re:invent infrastructure primitives for Generative AI.
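The tracing feature above leans on the W3C Trace Context standard, which is concrete enough to sketch. Below is a minimal Python illustration of building a `traceparent` header in the format that standard defines (version, trace-id, parent-id, trace-flags). This illustrates the standard itself, not Arch's internal code.

```python
import secrets

def make_traceparent(sampled: bool = True) -> str:
    """Build a W3C Trace Context `traceparent` header value.

    Format: {version}-{trace-id}-{parent-id}-{trace-flags}, where the
    trace-id is 16 random bytes and the parent-id (span-id) is 8 random
    bytes, both rendered as lowercase hex.
    """
    trace_id = secrets.token_hex(16)   # 32 hex chars
    span_id = secrets.token_hex(8)     # 16 hex chars
    flags = "01" if sampled else "00"  # 01 = sampled
    return f"00-{trace_id}-{span_id}-{flags}"

header = make_traceparent()
```

A gateway that propagates this header on every upstream LLM call is what lets existing observability tools stitch one request's hops into a single trace.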

Comment highlights

Cool demo. Looks very promising and I'd love to give it a try over the weekend. Congratulations on the launch folks 🚀🚀

I’m intrigued by the Prompt Guard feature. Ensuring safe user interactions is crucial, and having something handle that automatically sounds like it could simplify things a lot for devs focused on security. @adil_hafeez

I can see Arch being super helpful for streamlining complex prompt flows. @shuguang_chen

Congrats, Salman and the Arch team! 🎉 Arch sounds like a powerful solution for making prompt management and observability in AI-driven applications much more efficient. Love the focus on secure handling, robust traffic management, and fast function calling—perfect for anyone scaling agent-based and RAG apps.

Congrats on your launch 🚀🚀🚀 --- Arch seems cool and promising. I haven't tried it out yet, but I do have a few questions. Langchain currently dominates this space, and I see your pitch, but Arch doesn't seem to fit in the toolbox for users new to building GenAI apps. It clearly has benefits for prompt-heavy applications, but I don't see the appeal for newcomers - unless those folks simply aren't your target audience.

Hi, I'm Jose, current owner of Pukka Built. I contracted with Katanemo to help the team get Arch off the ground. It was great to bootstrap Arch alongside Adil and Salman as they launched it for developers and platform teams. As someone who built Envoy at Lyft, I can attest to the durability of Envoy as a design choice. For those who need efficient, reliable handling of LLM requests, Arch is a strong addition to the stack. For the same reasons the out-of-process architecture was a solid design choice for Envoy, Arch benefits as well: it gives teams more control over how prompts are managed without impacting existing services.

Congrats on the launch. Would love to try it, especially the prompt guard. Onward!

This is a cool project and great to see it come to fruition. We are working on a GenAI-based tool that will involve a front-end of sorts (let's call it an agent), and will likely leverage several LLMs on the back-end, depending on the type of request. Is this a good use case for Arch? If so, I'd love to get my team engaged here. Thanks @salman_paracha and team!

Congratulations to the team on the launch of Arch! This tool sounds exciting for developers. Building fast, personalized agents in minutes is impressive.

Hello! My name is Adil Hafeez, and I am the Co-Founder at Katanemo and the lead developer behind Arch. Previously I worked on Envoy at Lyft.

Arch is engineered with purpose-built LLMs. It handles the critical but undifferentiated tasks related to the handling and processing of prompts, including detecting and rejecting jailbreak attempts, intelligently calling “backend” APIs to fulfill the user’s request represented in a prompt, routing to and offering disaster recovery between upstream LLMs, and managing the observability of prompts and LLM interactions in a centralized way - all outside business logic.

Here are some additional key details of the project:

* Built on top of Envoy and written in Rust. It runs alongside application servers, and uses Envoy's proven HTTP management and scalability features to handle traffic related to prompts and LLMs.
* Function calling for fast agentic and RAG apps. Engineered with purpose-built fast LLMs to handle fast, cost-effective, and accurate prompt-based tasks like function/API calling and parameter extraction from prompts.
* Prompt guardrails to prevent jailbreak attempts and ensure safe user interactions without writing a single line of code.
* Manages LLM calls, offering smart retries, automatic cutover, and resilient upstream connections for continuous availability.
* Uses the W3C Trace Context standard to enable complete request tracing across applications, ensuring compatibility with observability tools, and provides metrics to monitor latency, token usage, and error rates, helping optimize AI application performance.

We love Arch, love open source, and would love to build alongside the community. Please leave a comment or feedback here and I will be happy to answer!
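The smart-retries-with-cutover behavior described above can be sketched in a few lines of Python. This is a hypothetical illustration of the general pattern (retry each upstream a bounded number of times, then fall over to the next provider), not Arch's actual implementation; all names here are made up.

```python
from typing import Callable, Sequence

class UpstreamError(Exception):
    """Raised by an upstream LLM call on a transient failure."""

def call_with_failover(
    upstreams: Sequence[Callable[[str], str]],
    prompt: str,
    retries_per_upstream: int = 2,
) -> str:
    """Try each upstream in order, retrying transient failures,
    and cut over to the next provider when one stays unavailable."""
    last_err = None
    for upstream in upstreams:
        for _ in range(retries_per_upstream):
            try:
                return upstream(prompt)
            except UpstreamError as err:
                last_err = err  # transient failure: retry, then cut over
    raise RuntimeError("all upstream LLMs unavailable") from last_err

# Usage: a flaky primary and a healthy fallback.
def flaky(prompt: str) -> str:
    raise UpstreamError("rate limited")

def healthy(prompt: str) -> str:
    return f"echo: {prompt}"

result = call_with_failover([flaky, healthy], "hello")
```

Doing this in a gateway rather than in each application is the point: the retry and cutover policy lives in one place, outside business logic.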

Congrats on the launch! Really great project -- I believe in the premise of a gateway that consolidates a lot of the infrastructure work needed for any LLM project.

Impressive work - At Meta we have the same core belief that the safety of agents is paramount, and the more of those concerns we can tackle early in the request path, the better. Arch feels like a great fit for responsible and safe AI - not to mention the other superpowers it offers developers. One quick question: can you elaborate on the prompt guard model? I see that you fine-tuned it on top of the Prompt Guard model from Meta?

Congrats, Salman! Sounds awesome! I’m following this project on GitHub. Keep it going! 🚀

Congrats on the launch! Love the speed and ease Arch brings to building personalized agents. Quick question: How does Arch handle data privacy and security?

This is awesome! Arch is a game-changer for building personalized agents. I love the idea of using Envoy as the foundation, as it's known for its scalability and reliability. The focus on prompt safety and observability is crucial for building trustworthy AI systems. I'm particularly excited about the fast function calling and parameter extraction capabilities – this will be a huge time-saver. I'm definitely going to check out the docs and give Arch a spin!

What's a personalised agent? Web chatbot, or personal assistant, or? This field is moving so fast, it's hard to know what terms mean these days. Thanks 🙏