Praesidia Agent Identity, Verification & Guardrails
Verify AI agents, check trust scores (0-100), discover marketplace agents, and apply guardrails for security and compliance.
Core Capabilities
- Verify agents - Check if an agent is registered, verified, and trustworthy
- Trust scores - View 0-100 trust ratings and verification status
- Agent discovery - Search marketplace for public agents by capability
- Guardrails - Apply security policies and content moderation to agents
- A2A protocol - Fetch standard Agent-to-Agent protocol cards
Prerequisites
- Praesidia account: https://praesidia.ai
- API key from Settings → API Keys
- Configure in ~/.openclaw/openclaw.json:
{
  "skills": {
    "entries": {
      "praesidia": {
        "apiKey": "pk_live_your_key_here",
        "env": {
          "PRAESIDIA_API_URL": "https://api.praesidia.ai"
        }
      }
    }
  }
}
For local development, set PRAESIDIA_API_URL to http://localhost:3000.
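If you assemble requests manually rather than through the web_fetch examples below, a minimal sketch of resolving the base URL and auth header looks like this. It assumes the runtime exposes the configured values as the PRAESIDIA_API_KEY and PRAESIDIA_API_URL environment variables, as the examples below do.

```javascript
// Minimal sketch, assuming the configured key/URL are exposed as environment variables.
const baseUrl = process.env.PRAESIDIA_API_URL || "https://api.praesidia.ai";
const headers = {
  "Authorization": `Bearer ${process.env.PRAESIDIA_API_KEY}`,
  "Accept": "application/json"
};
```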
Quick Reference
1. Verify an Agent
User says: "Is agent chatbot-v2 safe?" / "Verify agent chatbot-v2"
Your action:
web_fetch({
  url: "${PRAESIDIA_API_URL}/agents/chatbot-v2/agent-card",
  headers: {
    "Authorization": "Bearer ${PRAESIDIA_API_KEY}",
    "Accept": "application/json"
  }
})
Present to user:
- ✅ Agent name & description
- 🛡️ Trust score (0-100) and trust level
- ✅ Verification status (verified date)
- 🔧 Capabilities (what the agent can do)
- 📋 Compliance (SOC2, GDPR, etc.)
- 🔗 Agent card URL
Example output:
✅ ChatBot V2 is verified and safe to use!
Trust Score: 92.5/100 (VERIFIED)
Status: ACTIVE
Capabilities: message:send, task:create, data:analyze
Compliance: SOC2, GDPR
Last verified: 2 days ago
Agent card: https://api.praesidia.ai/agents/chatbot-v2/agent-card
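The exact agent-card schema is not reproduced here; as a rough sketch, a summary like the one above could be assembled from fields such as the following (name, trustScore, trustLevel, status, and capabilities are illustrative assumptions, not confirmed field names):

```javascript
// Illustrative sketch only - the response field names are assumptions based on the summary above.
const baseUrl = process.env.PRAESIDIA_API_URL || "https://api.praesidia.ai";
const res = await fetch(`${baseUrl}/agents/chatbot-v2/agent-card`, {
  headers: {
    "Authorization": `Bearer ${process.env.PRAESIDIA_API_KEY}`,
    "Accept": "application/json"
  }
});
if (!res.ok) throw new Error(`Agent card request failed: ${res.status}`);
const card = await res.json();
console.log(`${card.name}: ${card.trustScore}/100 (${card.trustLevel}), status ${card.status}`);
console.log(`Capabilities: ${(card.capabilities || []).join(", ")}`);
```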
2. List Guardrails for an Agent
User says: "What guardrails are configured for my agent?" / "Show me security policies for chatbot-v2"
Your action:
// First, get the user's organization ID from their profile or context
// Then fetch guardrails
web_fetch({
  url: "${PRAESIDIA_API_URL}/organizations/${orgId}/guardrails?agentId=${agentId}",
  headers: {
    "Authorization": "Bearer ${PRAESIDIA_API_KEY}",
    "Accept": "application/json"
  }
})
Present to user:
- List of guardrails with:
  - Name and description
  - Type (RULE, ML, LLM)
  - Category (CONTENT, SECURITY, COMPLIANCE, etc.)
  - Action (BLOCK, WARN, REDACT, REPLACE)
  - Scope (INPUT, OUTPUT, BOTH)
  - Enabled status
  - Trigger count
Example output:
Found 3 guardrails for ChatBot V2:
1. PII Detection (ENABLED)
- Type: ML | Category: SECURITY
- Scope: BOTH (input & output)
- Action: REDACT sensitive data
- Triggered: 45 times
2. Toxic Language Filter (ENABLED)
- Type: RULE | Category: CONTENT
- Scope: BOTH
- Action: BLOCK toxic content
- Triggered: 12 times
3. Financial Advice Warning (ENABLED)
- Type: LLM | Category: COMPLIANCE
- Scope: OUTPUT only
- Action: WARN if detected
- Triggered: 3 times
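A hedged sketch of turning the raw guardrail list into a summary like the one above. It assumes the endpoint returns an array of guardrail objects with the fields listed earlier (name, type, category, scope, action, isEnabled, triggerCount); those names are assumptions, not a documented schema.

```javascript
// Sketch only - field names mirror the bullets above and are assumptions, not a confirmed schema.
const baseUrl = process.env.PRAESIDIA_API_URL || "https://api.praesidia.ai";
const orgId = "your-org-id";   // placeholder
const agentId = "chatbot-v2";  // placeholder
const res = await fetch(`${baseUrl}/organizations/${orgId}/guardrails?agentId=${agentId}`, {
  headers: { "Authorization": `Bearer ${process.env.PRAESIDIA_API_KEY}`, "Accept": "application/json" }
});
const guardrails = await res.json();
for (const g of guardrails) {
  console.log(`${g.name} (${g.isEnabled ? "ENABLED" : "DISABLED"})`);
  console.log(`  Type: ${g.type} | Category: ${g.category} | Scope: ${g.scope} | Action: ${g.action}`);
  console.log(`  Triggered: ${g.triggerCount ?? 0} times`);
}
```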
3. Get Available Guardrail Templates
User says: "What guardrail templates are available?" / "Show me security templates"
Your action:
web_fetch({
  url: "${PRAESIDIA_API_URL}/organizations/${orgId}/guardrails/templates",
  headers: {
    "Authorization": "Bearer ${PRAESIDIA_API_KEY}",
    "Accept": "application/json"
  }
})
Available Templates:
Content Moderation:
- TOXIC_LANGUAGE - Detect toxic/harmful language
- PROFANITY_FILTER - Filter profanity
- HATE_SPEECH - Detect hate speech
- VIOLENCE_DETECTION - Detect violent content
- ADULT_CONTENT - Filter adult content
Security:
- PII_DETECTION - Detect personally identifiable information
- CREDIT_CARD_DETECTION - Detect credit card numbers
- SSN_DETECTION - Detect social security numbers
- API_KEY_DETECTION - Detect leaked API keys
- PROMPT_INJECTION - Detect prompt injection attacks
- JAILBREAK_DETECTION - Detect jailbreak attempts
Compliance:
- FINANCIAL_ADVICE - Flag financial advice
- MEDICAL_ADVICE - Flag medical advice
- LEGAL_ADVICE - Flag legal advice
- GDPR_COMPLIANCE - Enforce GDPR rules
- HIPAA_COMPLIANCE - Enforce HIPAA rules
Brand Safety:
- COMPETITOR_MENTIONS - Detect competitor mentions
- POSITIVE_TONE - Ensure positive tone
- BRAND_VOICE - Maintain brand voice
- OFF_TOPIC_DETECTION - Detect off-topic responses
Accuracy:
- HALLUCINATION_DETECTION - Detect hallucinations
- FACT_CHECKING - Verify facts
- SOURCE_VALIDATION - Validate sources
- CONSISTENCY_CHECK - Check consistency
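For presentation, templates can be grouped by category; a minimal sketch, assuming the endpoint returns an array of objects with key and category fields (an illustrative shape, not a documented one):

```javascript
// Sketch only - the { key, category } response shape is an assumption for illustration.
const baseUrl = process.env.PRAESIDIA_API_URL || "https://api.praesidia.ai";
const orgId = "your-org-id"; // placeholder
const res = await fetch(`${baseUrl}/organizations/${orgId}/guardrails/templates`, {
  headers: { "Authorization": `Bearer ${process.env.PRAESIDIA_API_KEY}`, "Accept": "application/json" }
});
const templates = await res.json();
const byCategory = {};
for (const t of templates) (byCategory[t.category] ||= []).push(t.key);
console.log(byCategory); // e.g. { SECURITY: ["PII_DETECTION", ...], CONTENT: [...] }
```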
4. Apply a Guardrail to an Agent
User says: "Add PII detection to my chatbot" / "Apply toxic language filter to agent xyz"
Your action:
web_fetch({
  url: "${PRAESIDIA_API_URL}/organizations/${orgId}/guardrails",
  method: "POST",
  headers: {
    "Authorization": "Bearer ${PRAESIDIA_API_KEY}",
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    name: "PII Detection",
    description: "Automatically detect and redact PII",
    agentId: "${agentId}",
    template: "PII_DETECTION",
    type: "ML",
    category: "SECURITY",
    scope: "BOTH",
    action: "REDACT",
    severity: "HIGH",
    isEnabled: true,
    priority: 0
  })
})
Guardrail Options:
Type:
- RULE - Simple regex/keyword matching (fast)
- ML - Machine learning model (balanced)
- LLM - LLM-powered validation (most accurate)
Category:
- CONTENT - Content moderation
- SECURITY - Security checks
- COMPLIANCE - Regulatory compliance
- BRAND - Brand safety
- ACCURACY - Accuracy checks
- CUSTOM - Custom rules
Scope:
- INPUT - Validate user input only
- OUTPUT - Validate agent output only
- BOTH - Validate both directions
Action:
- BLOCK - Block the request/response entirely
- WARN - Log warning but allow through
- REDACT - Mask the offending content
- REPLACE - Replace with alternative content
- RETRY - Retry with modified prompt
- ESCALATE - Escalate to human review
Severity:
- LOW, MEDIUM, HIGH, CRITICAL
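For example, a stricter configuration that blocks prompt-injection attempts on user input could combine these options as follows (a hedged sketch reusing the create payload from the example above; only the name, template, scope, action, and severity choices differ, and the name/description are illustrative):

```javascript
web_fetch({
  url: "${PRAESIDIA_API_URL}/organizations/${orgId}/guardrails",
  method: "POST",
  headers: {
    "Authorization": "Bearer ${PRAESIDIA_API_KEY}",
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    name: "Prompt Injection Shield",                           // illustrative name
    description: "Block prompt injection attempts in user input",
    agentId: "${agentId}",
    template: "PROMPT_INJECTION",
    type: "ML",
    category: "SECURITY",
    scope: "INPUT",    // user input only
    action: "BLOCK",   // block outright rather than redact
    severity: "CRITICAL",
    isEnabled: true,
    priority: 0
  })
})
```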
5. Validate Content Against Guardrails
User says: "Check if this message passes guardrails: [content]"
Your action:
web_fetch({
  url: "${PRAESIDIA_API_URL}/organizations/${orgId}/guardrails/validate",
  method: "POST",
  headers: {
    "Authorization": "Bearer ${PRAESIDIA_API_KEY}",
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    content: "User's message here",
    agentId: "${agentId}",
    scope: "INPUT"
  })
})
Response shows:
- Whether content passed or failed
- Which guardrails were triggered
- Suggested actions (block, redact, warn)
- Modified content (if redaction applied)
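If you handle the validation response manually, a hedged sketch might look like this (the passed, triggered, and modifiedContent field names are assumptions inferred from the bullets above, not a confirmed schema):

```javascript
// Sketch only - response field names are assumptions based on the bullets above.
const baseUrl = process.env.PRAESIDIA_API_URL || "https://api.praesidia.ai";
const orgId = "your-org-id"; // placeholder
const res = await fetch(`${baseUrl}/organizations/${orgId}/guardrails/validate`, {
  method: "POST",
  headers: { "Authorization": `Bearer ${process.env.PRAESIDIA_API_KEY}`, "Content-Type": "application/json" },
  body: JSON.stringify({ content: "User's message here", agentId: "chatbot-v2", scope: "INPUT" })
});
const result = await res.json();
if (result.passed) {
  console.log("Content passed all guardrails.");
} else {
  for (const hit of result.triggered || []) console.log(`Triggered: ${hit.name} -> ${hit.action}`);
  if (result.modifiedContent) console.log(`Redacted version: ${result.modifiedContent}`);
}
```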
6. Discover Public Agents
User says: "Find public data analysis agents" / "Show me chatbot agents"
Your action:
web_fetch({
  url: "${PRAESIDIA_API_URL}/agents/discovery?visibility=PUBLIC&search=data",
  headers: { "Accept": "application/json" }
  // Authorization is optional for public agents (include it for more results)
})
Filters available:
- ?visibility=PUBLIC - public marketplace agents
- ?role=SERVER - agents that provide services
- ?role=CLIENT - agents that consume services
- ?status=ACTIVE - only active agents
- ?search=keyword - search by name/description
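When combining several filters, building the query string with URLSearchParams avoids escaping mistakes; for example:

```javascript
// Build a discovery URL from the filters above.
const params = new URLSearchParams({ visibility: "PUBLIC", role: "SERVER", search: "data" });
const url = `${process.env.PRAESIDIA_API_URL || "https://api.praesidia.ai"}/agents/discovery?${params}`;
// -> https://api.praesidia.ai/agents/discovery?visibility=PUBLIC&role=SERVER&search=data
```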
Present to user:
- List of matching agents with:
  - Name, description, agent ID
  - Trust score and level
  - Role (SERVER/CLIENT)
  - Key capabilities
  - Link to full card
Example output:
Found 2 public data analysis agents:
1. OpenData Analyzer (VERIFIED - 88.0/100)
- Capabilities: data:analyze, chart:generate, report:create
- Role: SERVER | Status: ACTIVE
- Card: https://api.praesidia.ai/agents/opendata-1/agent-card
2. CSV Processor (STANDARD - 70.0/100)
- Capabilities: file:parse, data:transform, export:json
- Role: SERVER | Status: ACTIVE
- Card: https://api.praesidia.ai/agents/csv-proc/agent-card
7. List User's Agents
User says: "Show my agents" / "List all my server agents"
Your action:
web_fetch({
  url: "${PRAESIDIA_API_URL}/agents/discovery?role=SERVER",
  headers: {
    "Authorization": "Bearer ${PRAESIDIA_API_KEY}",
    "Accept": "application/json"
  }
})
This returns all agents the user has access to (their own + team/org agents).
Trust Levels Guide
Present trust information clearly to help users make decisions:
| Trust Score | Level | Meaning | Recommendation |
|---|---|---|---|
| 90-100 | VERIFIED | Fully vetted, compliant, verified identity | ✅ Safe to use |
| 70-89 | STANDARD | Good reputation, basic verification | ✅ Generally safe |
| 50-69 | LIMITED | Minimal verification | ⚠️ Use with caution |
| 0-49 | UNTRUSTED | Not verified or poor reputation | ❌ Not recommended |
Always show the trust score numerically (e.g., 92.5/100) and the level (e.g., VERIFIED).
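If only the numeric score is available, the level can be derived from the table above, for example:

```javascript
// Maps a numeric trust score to the levels in the table above.
function trustLevel(score) {
  if (score >= 90) return "VERIFIED";
  if (score >= 70) return "STANDARD";
  if (score >= 50) return "LIMITED";
  return "UNTRUSTED";
}
console.log(`${92.5}/100 (${trustLevel(92.5)})`); // "92.5/100 (VERIFIED)"
```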
Error Handling
| Error | Meaning | What to tell user |
|---|---|---|
| 401 Unauthorized | API key missing/invalid | "Check PRAESIDIA_API_KEY in ~/.openclaw/openclaw.json" |
| 403 Forbidden | No permission | "You don't have access to this agent" |
| 404 Not Found | Agent doesn't exist | "Agent not found. Check the agent ID" |
| 500 Server Error | Praesidia API issue | "Praesidia API temporarily unavailable. Try again" |
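A small sketch of turning HTTP status codes into the user-facing messages from the table (any unlisted status is treated as a temporary API issue):

```javascript
// Translate HTTP errors into the user-facing messages from the table above.
function explainError(status) {
  switch (status) {
    case 401: return "Check PRAESIDIA_API_KEY in ~/.openclaw/openclaw.json";
    case 403: return "You don't have access to this agent";
    case 404: return "Agent not found. Check the agent ID";
    default:  return "Praesidia API temporarily unavailable. Try again";
  }
}
```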
API Endpoints
GET /agents/:id/agent-card
Fetch detailed agent card with trust data.
Auth: Required for private/team/org agents, optional for public
Returns: A2A agent card + Praesidia extensions (trust, compliance)
GET /agents/discovery
List/search agents with filters.
Auth: Optional (more results with auth)
Query params: role, status, visibility, search
Returns: Array of agent summaries with card URLs
Guardrails Best Practices
When helping users with guardrails:
- Start with templates - Use predefined templates before custom rules
- Layer security - Combine multiple guardrails (PII + Toxic + Compliance)
- Test before enabling - Use validate endpoint to test content first
- Monitor triggers - Check stats regularly to tune thresholds
- Scope appropriately - Use INPUT for user content, OUTPUT for agent responses
- Choose the right action:
  - BLOCK for critical security issues (PII, prompt injection)
  - REDACT for sensitive data that can be masked
  - WARN for compliance/brand issues that need logging
  - ESCALATE for edge cases requiring human review
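As a sketch of the layering recommendation above, a baseline set could be applied in one pass; the payload shape mirrors section 4, the template keys come from section 3, and the orgId/agentId values are placeholders:

```javascript
// Sketch: apply a layered baseline (security + content + compliance) to one agent.
const baseUrl = process.env.PRAESIDIA_API_URL || "https://api.praesidia.ai";
const orgId = "your-org-id";   // placeholder
const agentId = "chatbot-v2";  // placeholder
const layers = [
  { name: "PII Detection", template: "PII_DETECTION", type: "ML", category: "SECURITY", scope: "BOTH", action: "REDACT", severity: "HIGH" },
  { name: "Toxic Language Filter", template: "TOXIC_LANGUAGE", type: "RULE", category: "CONTENT", scope: "BOTH", action: "BLOCK", severity: "HIGH" },
  { name: "GDPR Compliance", template: "GDPR_COMPLIANCE", type: "LLM", category: "COMPLIANCE", scope: "OUTPUT", action: "WARN", severity: "MEDIUM" }
];
for (const layer of layers) {
  await fetch(`${baseUrl}/organizations/${orgId}/guardrails`, {
    method: "POST",
    headers: { "Authorization": `Bearer ${process.env.PRAESIDIA_API_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({ ...layer, agentId, isEnabled: true, priority: 0 })
  });
}
```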
Best Practices
- Always verify before recommending - Check trust score before suggesting an agent
- Explain trust levels - Users may not know what "VERIFIED" means
- Filter by SERVER role - When users want agents to use/call
- Show compliance - Important for enterprise users (SOC2, GDPR)
- Present trust score numerically - 92.5/100 is clearer than just "VERIFIED"
- Layer guardrails - Combine security, content, and compliance guardrails
Common User Patterns
Pattern 1: Safety Check
User: "Is agent xyz safe to use?"
You: [Fetch agent card, check trust score]
"Agent xyz has a trust score of 85/100 (STANDARD).
It's verified for basic operations. What would you like to use it for?"
Pattern 2: Capability Discovery
User: "I need an agent that can analyze spreadsheets"
You: [Search discovery with visibility=PUBLIC&search=spreadsheet]
"I found 3 spreadsheet analysis agents. The highest rated is..."
Pattern 3: Fleet Management
User: "Show me all my agents that are inactive"
You: [Fetch discovery with status=INACTIVE]
"You have 2 inactive agents: [list with trust scores]"
Pattern 4: Apply Security
User: "I need to secure my chatbot against PII leaks"
You: [List available templates, recommend PII_DETECTION]
[Apply guardrail with REDACT action on BOTH scope]
"I've added PII Detection (ML-powered) to your chatbot.
It will automatically redact sensitive information in both
user inputs and bot responses."
Pattern 5: Compliance Check
User: "My agent handles healthcare data. What guardrails should I add?"
You: [Check if HIPAA compliance is required]
[Recommend HIPAA_COMPLIANCE + PII_DETECTION + AUDIT_LOGGING]
"For healthcare data, I recommend these guardrails:
1. HIPAA Compliance (BLOCK on violations)
2. PII Detection (REDACT)
3. Medical Advice Warning (WARN)
Would you like me to apply these?"
Environment Variables
- PRAESIDIA_API_KEY (required) - Your API key from https://app.praesidia.ai
- PRAESIDIA_API_URL (optional) - Defaults to https://api.praesidia.ai
  - Production: https://api.praesidia.ai
  - Local dev: http://localhost:3000
  - Custom: Your deployment URL
Additional Resources
- Full setup guide: See README.md in this skill folder
- API documentation: https://app.praesidia.ai/docs/api
- A2A protocol: https://a2a-protocol.org
- Support: [email protected] or https://discord.gg/e9EwZfHS
Security & Privacy
- All production requests use HTTPS
- API keys stored in OpenClaw config (never exposed to users)
- Private/team/org agents require authentication
- Public agents accessible without auth
- Trust verification protects against malicious agents