Every API key gets its own fuse.
The home for your AI API keys. Set a hard cap, swap one line of code, and we kill the request the millisecond your spend hits the limit. Never wake up to a $1,400 bill again.
OpenAI's hard limit reports usage 2 hours late.
In a recursive agent loop, that's enough time to burn $1,400. Real stories from the last 12 months:
- $72,000 · overnight retry loop, indie dev
- $47,000 · multi-agent stuck for 11 days
- $4,200 · leaked key drained in 4 hours
- $1,400 · single bad prompt before bed
Fusebox sits in front of your key as a transparent proxy. We count tokens in real time and return 402 fuse_blown the instant your spend hits the cap you set. No 2-hour delay. No SDK. One line of code.
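The metering decision is simple enough to sketch. A minimal Python model of what happens on each request — the `Fuse` class and the euro amounts here are illustrative, not Fusebox's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Fuse:
    cap_eur: float         # the hard cap you set
    spent_eur: float = 0.0  # running total, metered per request

def proxy(fuse: Fuse, request_cost_eur: float):
    """Decide one request the way the proxy does: block at the cap, else forward."""
    if fuse.spent_eur >= fuse.cap_eur:
        return 402, {"error": {"type": "fuse_blown"}}
    # (the real proxy forwards upstream here, then meters the provider's usage field)
    fuse.spent_eur += request_cost_eur
    return 200, {"ok": True}
```

Once `spent_eur` crosses `cap_eur`, every subsequent call short-circuits to 402 without ever reaching the provider.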
Spend over time
The gap between the dotted gray line and the red curve is the money you can’t un-spend.
Wired up in 30 seconds.
Fusebox is a drop-in proxy. Your existing code keeps working — just point it at us.
Add your key
Sign in with email, paste your OpenAI or Anthropic key. We encrypt it with AES-GCM before it touches our database.
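For the curious, AES-GCM key wrapping looks roughly like this. A hedged sketch using the `cryptography` package — the function names and the nonce-prefix storage format are illustrative, not Fusebox's actual code:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_api_key(api_key: str, master_key: bytes) -> bytes:
    nonce = os.urandom(12)  # fresh 96-bit nonce per encryption
    ct = AESGCM(master_key).encrypt(nonce, api_key.encode(), None)
    return nonce + ct       # store the nonce alongside the ciphertext

def decrypt_api_key(blob: bytes, master_key: bytes) -> str:
    # GCM authenticates on decrypt: a tampered blob raises InvalidTag
    return AESGCM(master_key).decrypt(blob[:12], blob[12:], None).decode()
```

Because GCM is authenticated, a flipped bit in the stored blob fails loudly instead of decrypting to garbage.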
Set the cap
€5? €50? You decide. Per-fuse hard limits in EUR. Once a fuse blows, every request stops cold.
Swap one line
Change your base_url to api.tryfusebox.dev/v1. Your existing SDK code is unchanged. That's the entire integration.
from openai import OpenAI
client = OpenAI(
- api_key=os.environ["OPENAI_API_KEY"],
+ api_key="fb_live_8a7c92d4...", # your fusebox safe key
+ base_url="https://api.tryfusebox.dev/v1", # the only swap
)
# Everything else stays exactly the same.
client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": "ship it"}],
)
Cheaper than one bad night.
€12/mo to never wake up to a $1,400 bill again. That's a 116× return on the worst night of your year.
Free
€50/mo coverage
Start free
- 1 fuse
- €50/mo total coverage
- OpenAI + Anthropic
- 7-day request history
- No alerts
Pro
€12/mo or €99/year · save €45
Upgrade to Pro
- 10 fuses
- €1,000/mo total coverage
- Email + Telegram alerts at 50/80/100%
- 90-day request history
- Daily / monthly auto-reset
- Pause & resume without losing the counter
Lifetime · first 50
Pro forever — first 50 only
Get lifetime →
- Everything in Pro
- One payment, Pro forever
- All future Pro features included
- Then it’s gone — only 50 of these
Don’t trust us with your key? Audit the code.
The proxy worker, dashboard, and self-host script are all open source under MIT. Fork it, run it on your own Cloudflare account, or just read the encryption code line by line. The hosted version exists so you don't have to.
FAQ
Why should I trust Fusebox with my API key?
You don't have to. The whole codebase is on GitHub, MIT-licensed. Self-host it on your own Cloudflare account in 5 minutes. The hosted version is there for convenience — but every line is auditable.
How accurate is the spend tracking?
We use the upstream provider's own usage field (returned in every response, including streaming). Drift vs. your real bill is typically under 2%, and we round up — so we'll blow the fuse a fraction early rather than late.
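As a concrete example of that round-up, here's how a per-request cost might be derived from the provider's usage field. The per-token prices below are illustrative placeholders, not live pricing, and the function name is invented for this sketch:

```python
import math

# Illustrative (input, output) prices in USD per 1M tokens — check live pricing.
PRICES = {"gpt-4o": (2.50, 10.00)}

def request_cost_usd(model: str, usage: dict) -> float:
    """Cost from the provider's usage field, rounded UP to the cent."""
    price_in, price_out = PRICES[model]
    raw = (usage["prompt_tokens"] * price_in
           + usage["completion_tokens"] * price_out) / 1_000_000
    return math.ceil(raw * 100) / 100  # round up: blow the fuse early, never late
```

Rounding up to the cent is what makes the fuse trip a fraction early rather than a fraction late.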
What's the latency overhead?
Under 50ms p95, globally. We run on Cloudflare Workers — same edge network you're already proxying through. Streaming responses pass through unchanged.
What happens when a fuse blows?
Your request returns HTTP 402 with a fuse_blown error code. Same schema across providers. Reset the fuse from the dashboard (or call the API), and traffic flows again.
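Client-side, a blown fuse is just another status code to branch on. A minimal sketch assuming the error body shape `{"error": {"type": "fuse_blown"}}` — `FuseBlown` and `raise_for_fuse` are names invented here, not part of any SDK:

```python
class FuseBlown(Exception):
    """Raised when the proxy answers 402 with error type 'fuse_blown'."""

def raise_for_fuse(status: int, body: dict) -> dict:
    """Turn the proxy's 402 into an exception; pass other responses through."""
    if status == 402 and body.get("error", {}).get("type") == "fuse_blown":
        # Stop here — retrying a blown fuse just hammers the proxy for more 402s.
        raise FuseBlown(body["error"].get("message", "spend cap reached"))
    return body
```

Wrapping your retry loop in a `try/except FuseBlown` means a blown fuse halts the agent instead of retrying forever.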
What if Fusebox itself goes down?
By default we fail-open: if our worker can't be reached, your code talks to OpenAI directly. (Pro users can flip this to fail-closed.) Either way, our uptime is on your status page.
Which models are supported?
OpenAI and Anthropic at launch — gpt-4o, gpt-4o-mini, o1, claude-3.5-sonnet, claude-3.5-haiku, claude-3-opus, embeddings, plus generic /v1/* passthrough for anything else.
Sleep tonight. Ship tomorrow.
Set up your first fuse in 30 seconds. Free forever for 1 key.