TVI is an in-VPC solution for fast, unmetered embedding inference. Get fastest-in-class embeddings using any private, custom, or open-source models from dedicated embedding servers hosted in your own cloud. Battle-tested by billions of documents and queries.
Hello y'all,
My name is Fede. I am the least technical member of Trieve, and I'm proud to announce the launch of our standalone embedding and reranking inference product, Trieve Vector Inference, on Product Hunt.
We've been building AI applications together since late 2022. As we matured and eventually pivoted hard into building infrastructure, we quickly learned what we could and could not control. There were two major bottlenecks to becoming the performant end-to-end API we are today, and the most important of them was embedding and reranking inference.
Building AI features at scale exposes two critical limitations of cloud embedding APIs: high latency and rate limits. Modern AI applications need infrastructure that removes both.
The platform supports any embedding model, whether it’s your own custom model, a private model, or popular open-source options. You get the flexibility to choose the right model for your use case while maintaining complete control over your infrastructure.
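To make the "bring your own model, in your own VPC" idea concrete, here is a minimal sketch of calling a self-hosted embedding server. This is a hypothetical example, not TVI's documented API: the `tvi.internal` hostname and the `/embed` route are assumptions (the route follows the convention used by common open-source embedding servers).

```python
import json

def build_embed_request(texts, truncate=True):
    """Build the JSON body for a batch embedding request.

    Hypothetical payload shape: a list of input texts plus a flag
    telling the server to truncate inputs that exceed the model's
    context length.
    """
    return json.dumps({"inputs": texts, "truncate": truncate})

body = build_embed_request(["What is vector inference?"])

# Send with any HTTP client against your in-VPC server, e.g.:
#   curl -X POST http://tvi.internal:8080/embed \
#        -H "Content-Type: application/json" -d "$body"
```

Because the server runs on dedicated hardware inside your own cloud, there is no per-request metering and no third-party rate limit between your application and the model.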
We put together TVI to eliminate these bottlenecks for our own core product. It's served billions of queries across billions of documents. After requests from others, we've sanded it down, written up some docs, and are now making it available to all. You can even get it on AWS Marketplace!
Sincerely,
Fede
P.S. If you're curious about the other bottleneck, we have a sister launch going on today as well for PDF2MD, a lightweight and powerful OCR service. Just click on our company profile to check it out (and support it!)
Congrats on the launch of TVI! This looks like a game-changer for embedding inference in the cloud. How do you handle scaling and pricing for different use-cases?
This is a significant advancement for embedding-heavy workloads. @fedchator Great job.
This problem is so Trieve! As I read about "solving bottlenecks" and "building fast APIs for embedding and reranking inference", I couldn't think of any other team that could be behind this. I'm really curious to know how you made the reranking inference so quick—I'll be checking out your repo soon :)
About Trieve Vector Inference on Product Hunt
“Deploy fast, unmetered embedding inference in your own VPC”
Trieve Vector Inference launched on Product Hunt on November 21st, 2024 and earned 176 upvotes and 7 comments, placing #7 on the daily leaderboard.
Trieve Vector Inference was featured in API (98.1k followers), Developer Tools (511.2k followers) and Artificial Intelligence (466.5k followers) on Product Hunt. Together, these topics include over 163.2k products, making this a competitive space to launch in.
Who hunted Trieve Vector Inference?
Trieve Vector Inference was hunted by Federico Chávez Torres. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community. See the full all-time top hunters leaderboard to discover who is shaping the Product Hunt ecosystem.
Reviews
Trieve Vector Inference has received 2 reviews on Product Hunt with an average rating of 5.00/5. Read all reviews on Product Hunt.