Policy Basics
Learn how to write and manage OPA (Open Policy Agent) policies to control what actions your AI agents can perform.
How Policies Work
Every action submitted to the firewall is evaluated against your policies. Policies are written in Rego, OPA's policy language, and return one of three decisions:
- Allow: Action proceeds immediately
- Deny: Action is blocked with a reason
- Require Approval: Action waits for human approval
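Each decision is returned as a JSON object with a `result` and, for deny and approval outcomes, a human-readable `reason` (this mirrors the `decision` objects constructed in the Rego examples below):

```json
{ "result": "deny", "reason": "No matching policy" }
```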
Policy Structure
package firewall

# Needed for the `in` keyword on OPA versions before 1.0
import future.keywords.in

# Default: deny all actions
default decision = {"result": "deny", "reason": "No matching policy"}

# Allow read operations to safe domains
decision = {"result": "allow"} {
    input.action.operation == "GET"
    safe_domain(input.action.params.url)
}

# Require approval for write operations
decision = {"result": "require_approval", "reason": "Write operations need approval"} {
    input.action.operation in ["POST", "PUT", "DELETE"]
}

# Helper: check whether the URL contains an approved domain
safe_domain(url) {
    allowed_domains := ["api.company.com", "internal.company.com"]
    some domain in allowed_domains
    contains(url, domain)
}
Policy Input
Policies receive action data as input. The input structure includes:
{
  "action": {
    "id": "act_abc123",
    "tool": "http_proxy",
    "operation": "POST",
    "params": {
      "url": "https://api.example.com/data",
      "method": "POST",
      "body": { "key": "value" }
    }
  },
  "agent": {
    "id": "customer-service-agent",
    "org_id": "org_xyz"
  },
  "context": {
    "timestamp": "2024-12-25T12:00:00Z",
    "ip_address": "192.168.1.1"
  }
}
Common Patterns
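Agent-Based Rules
Rules can reference fields outside `action` as well, such as the agent identity from the input document above. A minimal sketch, using a hypothetical agent ID (`reporting-agent` is illustrative, not a real agent):

```rego
# Require approval for a specific, less-trusted agent
# (the agent ID below is hypothetical)
decision = {"result": "require_approval", "reason": "Agent requires review"} {
    input.agent.id == "reporting-agent"
}
```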
Domain Allowlisting
# Allow requests only to approved domains
decision = {"result": "allow"} {
    input.action.tool == "http_proxy"
    allowed_domain(input.action.params.url)
}

allowed_domain(url) {
    allowed := [
        "api.company.com",
        "internal.service.com",
        "api.stripe.com"
    ]
    some domain in allowed
    startswith(url, concat("", ["https://", domain]))
}
Time-Based Restrictions
# Require approval outside business hours
decision = {"result": "require_approval", "reason": "Outside business hours"} {
    input.action.operation in ["POST", "PUT", "DELETE"]
    not business_hours
}

business_hours {
    # time.clock returns [hour, minute, second]
    clock := time.clock(time.now_ns())
    clock[0] >= 9
    clock[0] < 17
    # time.weekday returns the day name as a string, e.g. "Monday"
    day := time.weekday(time.now_ns())
    not day in {"Saturday", "Sunday"}
}
Rate Limiting
# Require approval if too many requests
decision = {"result": "require_approval", "reason": "Rate limit exceeded"} {
    input.context.request_count_1h > 100
}
Policy Priority
When multiple policies could apply, they are evaluated in priority order (lowest number = highest priority). The first matching decision wins.
| Priority | Policy | Description |
|---|---|---|
| 1 | Security Blocks | Block known malicious patterns |
| 10 | Org Overrides | Organization-specific rules |
| 100 | Default Policy | Standard allow/deny/approve rules |
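Within a single Rego file, this first-match ordering can be mirrored with an `else` chain, which evaluates rule bodies in order and stops at the first match. A minimal sketch (the URL pattern and rules here are illustrative, not the firewall's built-in security blocks):

```rego
# Priority 1 analogue: security block evaluates first
decision = {"result": "deny", "reason": "Known malicious pattern"} {
    contains(input.action.params.url, "malware")  # hypothetical pattern
} else = {"result": "allow"} {
    # Priority 100 analogue: standard rule, only reached if no block matched
    input.action.operation == "GET"
}
```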
Testing Policies
Use the policy simulator to test your policies before deploying:
# Via API
curl -X POST https://api.agentactionfirewall.com/admin/policies/:id/simulate \
  -H "Authorization: Bearer $SUPABASE_JWT" \
  -H "Content-Type: application/json" \
  -d '{
    "input": {
      "action": {
        "tool": "http_proxy",
        "operation": "POST",
        "params": { "url": "https://api.example.com" }
      }
    }
  }'
AI-Powered Policies
For semantic analysis beyond pattern matching, use NLP policies powered by LLMs. These can detect harmful content, PII, and malicious intent that traditional rules-based policies might miss.
- Content Safety: Detect harmful, toxic, or inappropriate content
- PII Detection: Identify personally identifiable information
- Intent Classification: Detect suspicious or malicious intent
Learn more about NLP policies →