The Cloud for AI Agents

Spin up secure sandboxes in ~100 ms

Software Engineering
Artificial Intelligence
SDK

Hopx lets you spin up fully isolated Linux microVMs in ~100 ms — perfect for running AI agents, notebooks, or untrusted code safely at scale. Each sandbox runs in its own Firecracker microVM, with full state persistence and SDKs for Python, JS, Go, and more. No cold-start lag. No runtime limits. Just raw performance and security.

Top comment

Hey everyone 👋

I’m Alin, founder of Hopx (and Bunnyshell). Over the past year, we’ve rebuilt the idea of “ephemeral environments” from the ground up — this time for AI agents and developers who need instant, isolated sandboxes. Hopx sandboxes launch in ~100 ms, run for hours or days, and give you full Linux access — safely.

We’re giving $200 in free compute credits to early users — your feedback will directly shape how Hopx evolves.

👉 Try it out, run an agent, or launch your first sandbox: hopx.ai

Excited to hear what you’ll build ⚡

Comment highlights

Does it work on microcontrollers as well? :)

Congrats on the product and on the launch! Hope to see you becoming the next unicorn 🚀

Amazing work and congratulations on the launch! Do you mind sharing how Hopx compares to E2B?

Congratulations @alin_dobra & @The Cloud for AI Agents! I was waiting for a product like this! 🚀


from bunnyshell import Sandbox

# Create a sandbox from the code-interpreter template
sandbox = Sandbox.create(template="code-interpreter")
print(f"✅ Sandbox created: {sandbox.sandbox_id}")

# Inspect sandbox metadata
info = sandbox.get_info()
print(f"Status: {info.status}")
print(f"Info: {info}")

# Run a command inside the microVM; the destructive part only
# touches this disposable sandbox, never the host
result = sandbox.commands.run('ls / && rm -rf / ')
print(result.stdout)
print(f"Exit code: {result.exit_code}")

# Cleanup
sandbox.kill()
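
A note on that last command: `rm -rf /` only wipes the sandbox's own root filesystem. Each sandbox is a disposable Firecracker microVM, so the host and every other sandbox are untouched; that isolation is exactly what the product is claiming.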



Always wanted to run `rm -rf /`, now I can :)

🚀 Update: We just released the MCP Server for Hopx!

You can now connect Hopx sandboxes directly into your MCP-compatible IDEs and agent frameworks.

🔗 https://github.com/hopx-ai/mcp

Simply add the Hopx MCP server to your config:

{
  "mcpServers": {
    "hopx-sandbox": {
      "command": "uvx",
      "args": ["hopx-mcp"],
      "env": {
        "HOPX_API_KEY": "your-api-key-here"
      }
    }
  }
}
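
To sanity-check the setup before wiring it into an IDE, you can run the entry point from the config (`uvx hopx-mcp`) directly in a terminal with `HOPX_API_KEY` set in your environment; if it launches without errors, the package is resolvable and your client should be able to start it.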

Build, test, and run your agents in fully isolated microVMs — right from your IDE.

Congrats on the launch! A couple of things I was curious about:
Is the main value prop the speed of spinning up new environments?
And how would this fit into an existing cloud setup - does it run alongside AWS/GCP, or replace part of it?

This is awesome! How many programming languages do you support today?
Congrats on the launch!

Hey everyone 👋

I’m really excited to share what our team has been building.

For the last few months, we’ve been heads-down rethinking how developers spin up fast, safe execution environments for AI workflows. Most existing solutions were either too slow, too heavy, or too fragile for real agent-driven systems — so we built a platform that gives you instant, fully isolated compute on demand.

Environments boot in ~100 ms and come with full Linux access. No infra wrangling, no DevOps detours — just install the SDK and start running code.
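
In practice that loop is only a few lines. Here's a minimal sketch using the same client calls as the snippet above (the `bunnyshell` import and the `code-interpreter` template name are carried over from that example and may differ in your setup):

from bunnyshell import Sandbox

# Boot an isolated sandbox; the template name is an assumption
# carried over from the earlier example
sandbox = Sandbox.create(template="code-interpreter")

# Execute a command inside the microVM and read its output
result = sandbox.commands.run("echo hello from a sandbox")
print(result.stdout)

# Throw the environment away when you're done
sandbox.kill()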

👉 Give it a try, launch your first environment, and let us know what you think.

Can’t wait to see what you build ⚡

🚀 Big milestone for our team.

Over the past months, we’ve been focused on one goal — making it effortless and safe to run untrusted code at scale. Traditional containers couldn’t deliver the speed or isolation modern AI systems need, so we built something new.

Hopx Sandboxes launch in ~100 ms, run as full microVMs, and provide true isolation — the security of VMs with the speed and simplicity developers expect.

They’re built for developers and AI workloads alike — from code interpreters and agent frameworks to automated testing and data pipelines. With SDKs in six languages, clean APIs, and detailed documentation, you can go from install to execution in minutes.

No complex setup, no DevOps overhead: just a pip or npm install and you're good to go!

Couldn’t be prouder of what the team has built — and excited to see how developers use it next.