n8n Workflow Management
Comprehensive workflow automation management for the n8n platform, covering workflow creation, testing, execution monitoring, and performance optimization.
⚠️ CRITICAL: Workflow Creation Rules
When creating n8n workflows, ALWAYS:
- ✅ Generate COMPLETE workflows with all functional nodes
- ✅ Include actual HTTP Request nodes for API calls (ImageFX, Gemini, Veo, Suno, etc.)
- ✅ Add Code nodes for data transformation and logic
- ✅ Create proper connections between all nodes
- ✅ Use real node types (n8n-nodes-base.httpRequest, n8n-nodes-base.code, n8n-nodes-base.set)
NEVER:
- ❌ Create "Setup Instructions" placeholder nodes
- ❌ Generate workflows with only TODO comments
- ❌ Make incomplete workflows requiring manual node addition
- ❌ Use text-only nodes as substitutes for real functionality
Example GOOD workflow:
Manual Trigger → Set Config → HTTP Request (API call) → Code (parse) → Response
Example BAD workflow:
Manual Trigger → Code ("Add HTTP nodes here, configure APIs...")
Always build the complete, functional workflow with all necessary nodes configured and connected.
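For reference, a minimal complete workflow body looks roughly like the sketch below. The node names, parameters, typeVersion values, and the example API URL are illustrative assumptions; what matters is that every node is a real node type and every node is wired into connections.
{
  "name": "Example API Workflow",
  "nodes": [
    {"name": "Manual Trigger", "type": "n8n-nodes-base.manualTrigger", "typeVersion": 1, "position": [0, 0], "parameters": {}},
    {"name": "Set Config", "type": "n8n-nodes-base.set", "typeVersion": 1, "position": [220, 0], "parameters": {}},
    {"name": "Call API", "type": "n8n-nodes-base.httpRequest", "typeVersion": 4, "position": [440, 0], "parameters": {"method": "POST", "url": "https://api.example.com/generate"}},
    {"name": "Parse Response", "type": "n8n-nodes-base.code", "typeVersion": 2, "position": [660, 0], "parameters": {"jsCode": "return $input.all();"}}
  ],
  "connections": {
    "Manual Trigger": {"main": [[{"node": "Set Config", "type": "main", "index": 0}]]},
    "Set Config": {"main": [[{"node": "Call API", "type": "main", "index": 0}]]},
    "Call API": {"main": [[{"node": "Parse Response", "type": "main", "index": 0}]]}
  }
}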
Setup
Required environment variables:
- N8N_API_KEY: Your n8n API key (Settings → API in the n8n UI)
- N8N_BASE_URL: Your n8n instance URL
Configure credentials via OpenClaw settings:
Add to ~/.config/openclaw/settings.json:
{
"skills": {
"n8n": {
"env": {
"N8N_API_KEY": "your-api-key-here",
"N8N_BASE_URL": "your-n8n-url-here"
}
}
}
}
Or set per-session (do not persist secrets in shell rc files):
export N8N_API_KEY="your-api-key-here"
export N8N_BASE_URL="your-n8n-url-here"
Verify connection:
python3 scripts/n8n_api.py list-workflows --pretty
Security note: Never store API keys in plaintext shell config files (~/.bashrc, ~/.zshrc). Use the OpenClaw settings file or a secure secret manager.
Quick Reference
Workflow Management
List Workflows
python3 scripts/n8n_api.py list-workflows --pretty
python3 scripts/n8n_api.py list-workflows --active true --pretty
Get Workflow Details
python3 scripts/n8n_api.py get-workflow --id <workflow-id> --pretty
Create Workflows
# From JSON file
python3 scripts/n8n_api.py create --from-file workflow.json
Activate/Deactivate
python3 scripts/n8n_api.py activate --id <workflow-id>
python3 scripts/n8n_api.py deactivate --id <workflow-id>
Testing & Validation
Validate Workflow Structure
# Validate existing workflow
python3 scripts/n8n_tester.py validate --id <workflow-id>
# Validate from file
python3 scripts/n8n_tester.py validate --file workflow.json --pretty
# Generate validation report
python3 scripts/n8n_tester.py report --id <workflow-id>
Dry Run Testing
# Test with data
python3 scripts/n8n_tester.py dry-run --id <workflow-id> --data '{"email": "[email protected]"}'
# Test with data file
python3 scripts/n8n_tester.py dry-run --id <workflow-id> --data-file test-data.json
# Full test report (validation + dry run)
python3 scripts/n8n_tester.py dry-run --id <workflow-id> --data-file test.json --report
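The data file is simply the JSON payload handed to the workflow under test. A hypothetical test-data.json (field names are illustrative):
{
  "email": "[email protected]",
  "name": "Test User"
}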
Test Suite
# Run multiple test cases
python3 scripts/n8n_tester.py test-suite --id <workflow-id> --test-suite test-cases.json
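The test case file mirrors the structure used by the Python test-suite API (see Python API below): a list of cases with name, input, and expected fields. A sketch with illustrative values; confirm the exact schema against scripts/n8n_tester.py:
[
  {"name": "Valid signup", "input": {"email": "[email protected]"}, "expected": {"status": "success"}},
  {"name": "Missing email", "input": {}, "expected": {"status": "error"}}
]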
Execution Monitoring
List Executions
# Recent executions (all workflows)
python3 scripts/n8n_api.py list-executions --limit 10 --pretty
# Specific workflow executions
python3 scripts/n8n_api.py list-executions --id <workflow-id> --limit 20 --pretty
Get Execution Details
python3 scripts/n8n_api.py get-execution --id <execution-id> --pretty
Manual Execution
# Trigger workflow
python3 scripts/n8n_api.py execute --id <workflow-id>
# Execute with data
python3 scripts/n8n_api.py execute --id <workflow-id> --data '{"key": "value"}'
Performance Optimization
Analyze Performance
# Full performance analysis
python3 scripts/n8n_optimizer.py analyze --id <workflow-id> --pretty
# Analyze specific period
python3 scripts/n8n_optimizer.py analyze --id <workflow-id> --days 30 --pretty
Get Optimization Suggestions
# Priority-ranked suggestions
python3 scripts/n8n_optimizer.py suggest --id <workflow-id> --pretty
Generate Optimization Report
# Human-readable report with metrics, bottlenecks, and suggestions
python3 scripts/n8n_optimizer.py report --id <workflow-id>
Get Workflow Statistics
# Execution statistics
python3 scripts/n8n_api.py stats --id <workflow-id> --days 7 --pretty
Python API
Basic Usage
from scripts.n8n_api import N8nClient
client = N8nClient()
# List workflows
workflows = client.list_workflows(active=True)
# Get workflow
workflow = client.get_workflow('workflow-id')
# Create workflow
new_workflow = client.create_workflow({
'name': 'My Workflow',
'nodes': [...],
'connections': {...}
})
# Activate/deactivate
client.activate_workflow('workflow-id')
client.deactivate_workflow('workflow-id')
# Executions
executions = client.list_executions(workflow_id='workflow-id', limit=10)
execution = client.get_execution('execution-id')
# Execute workflow
result = client.execute_workflow('workflow-id', data={'key': 'value'})
Validation & Testing
from scripts.n8n_api import N8nClient
from scripts.n8n_tester import WorkflowTester
client = N8nClient()
tester = WorkflowTester(client)
# Validate workflow
validation = tester.validate_workflow(workflow_id='123')
print(f"Valid: {validation['valid']}")
print(f"Errors: {validation['errors']}")
print(f"Warnings: {validation['warnings']}")
# Dry run
result = tester.dry_run(
workflow_id='123',
test_data={'email': '[email protected]'}
)
print(f"Status: {result['status']}")
# Test suite
test_cases = [
{'name': 'Test 1', 'input': {...}, 'expected': {...}},
{'name': 'Test 2', 'input': {...}, 'expected': {...}}
]
results = tester.test_suite('123', test_cases)
print(f"Passed: {results['passed']}/{results['total_tests']}")
# Generate report
report = tester.generate_test_report(validation, result)
print(report)
Performance Optimization
from scripts.n8n_optimizer import WorkflowOptimizer
optimizer = WorkflowOptimizer()
# Analyze performance
analysis = optimizer.analyze_performance('workflow-id', days=7)
print(f"Performance Score: {analysis['performance_score']}/100")
print(f"Health: {analysis['execution_metrics']['health']}")
# Get suggestions
suggestions = optimizer.suggest_optimizations('workflow-id')
print(f"Priority Actions: {len(suggestions['priority_actions'])}")
print(f"Quick Wins: {len(suggestions['quick_wins'])}")
# Generate report
report = optimizer.generate_optimization_report(analysis)
print(report)
Common Workflows
1. Validate and Test Workflow
# Validate workflow structure
python3 scripts/n8n_tester.py validate --id <workflow-id> --pretty
# Test with sample data
python3 scripts/n8n_tester.py dry-run --id <workflow-id> \
--data '{"email": "[email protected]", "name": "Test User"}'
# If tests pass, activate
python3 scripts/n8n_api.py activate --id <workflow-id>
2. Debug Failed Workflow
# Check recent executions
python3 scripts/n8n_api.py list-executions --id <workflow-id> --limit 10 --pretty
# Get specific execution details
python3 scripts/n8n_api.py get-execution --id <execution-id> --pretty
# Validate workflow structure
python3 scripts/n8n_tester.py validate --id <workflow-id>
# Generate test report
python3 scripts/n8n_tester.py report --id <workflow-id>
# Check for optimization issues
python3 scripts/n8n_optimizer.py report --id <workflow-id>
3. Optimize Workflow Performance
# Analyze current performance
python3 scripts/n8n_optimizer.py analyze --id <workflow-id> --days 30 --pretty
# Get actionable suggestions
python3 scripts/n8n_optimizer.py suggest --id <workflow-id> --pretty
# Generate comprehensive report
python3 scripts/n8n_optimizer.py report --id <workflow-id>
# Review execution statistics
python3 scripts/n8n_api.py stats --id <workflow-id> --days 30 --pretty
# Test optimizations with dry run
python3 scripts/n8n_tester.py dry-run --id <workflow-id> --data-file test-data.json
4. Monitor Workflow Health
# Check active workflows
python3 scripts/n8n_api.py list-workflows --active true --pretty
# Review recent execution status
python3 scripts/n8n_api.py list-executions --limit 20 --pretty
# Get statistics for each critical workflow
python3 scripts/n8n_api.py stats --id <workflow-id> --pretty
# Generate health reports
python3 scripts/n8n_optimizer.py report --id <workflow-id>
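The same health check can be scripted with the Python client. A sketch that assumes workflows and executions are returned as dictionaries with id, name, and status fields; verify the exact shapes against scripts/n8n_api.py:
from scripts.n8n_api import N8nClient

client = N8nClient()

# Flag active workflows whose recent executions include failures.
for workflow in client.list_workflows(active=True):
    executions = client.list_executions(workflow_id=workflow["id"], limit=20)
    failed = [e for e in executions if e.get("status") == "error"]  # "status" is an assumed field name
    if failed:
        print(f"{workflow['name']}: {len(failed)}/{len(executions)} recent executions failed")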
Validation Checks
The testing module performs comprehensive validation:
Structure Validation
- ✅ Required fields present (nodes, connections)
- ✅ All nodes have names and types
- ✅ Connection targets exist
- ✅ No disconnected nodes (warning)
Configuration Validation
- ✅ Nodes requiring credentials are configured
- ✅ Required parameters are set
- ✅ HTTP nodes have URLs
- ✅ Webhook nodes have paths
- ✅ Email nodes have content
Flow Validation
- ✅ Workflow has trigger nodes
- ✅ Proper execution flow
- ✅ No circular dependencies
- ✅ End nodes identified
Optimization Analysis
The optimizer analyzes multiple dimensions:
Execution Metrics
- Total executions
- Success/failure rates
- Health status (excellent/good/fair/poor)
- Error patterns
Performance Metrics
- Node count and complexity
- Connection patterns
- Expensive operations (API calls, database queries)
- Parallel execution opportunities
Bottleneck Detection
- Sequential expensive operations
- High failure rates
- Missing error handling
- Rate limit issues
Optimization Opportunities
- Parallel Execution: Identify nodes that can run concurrently
- Caching: Suggest caching for repeated API calls
- Batch Processing: Recommend batching for large datasets
- Error Handling: Add error recovery mechanisms
- Complexity Reduction: Split complex workflows
- Timeout Settings: Configure execution limits (see the settings sketch below)
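For timeouts and error recovery, the workflow's settings object (which sits alongside nodes and connections in the workflow body) is the usual place to start. A sketch; key names and availability depend on your n8n version, so confirm against the n8n docs:
{
  "settings": {
    "executionTimeout": 300,
    "errorWorkflow": "<error-handler-workflow-id>",
    "saveDataErrorExecution": "all"
  }
}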
Performance Scoring
Workflows receive a performance score (0-100) based on:
- Success Rate: Higher is better (50% weight)
- Complexity: Lower is better (30% weight)
- Bottlenecks: Fewer is better (critical: -20, high: -10, medium: -5)
- Optimizations: Implemented best practices (+5 each)
Score interpretation:
- 90-100: Excellent - Well-optimized
- 70-89: Good - Minor improvements possible
- 50-69: Fair - Optimization recommended
- 0-49: Poor - Significant issues
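As an illustration of how these weights can combine, here is a back-of-the-envelope sketch; the actual formula lives in scripts/n8n_optimizer.py and may differ:
# Hypothetical scoring arithmetic, not the implementation in n8n_optimizer.py.
success_rate = 0.92                      # 92% of executions succeeded
complexity = 0.30                        # normalized 0-1, lower is better
bottlenecks = {"critical": 0, "high": 1, "medium": 2}
optimizations_applied = 2                # best practices already in place

score = success_rate * 50                # 50% weight
score += (1 - complexity) * 30           # 30% weight
score -= bottlenecks["critical"] * 20 + bottlenecks["high"] * 10 + bottlenecks["medium"] * 5
score += optimizations_applied * 5
score = max(0, min(100, score))
print(f"Performance score: {score:.0f}/100")   # 57/100 -> Fair, optimization recommended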
Best Practices
Development
- Plan Structure: Design workflow nodes and connections before building
- Validate First: Always validate before deployment
- Test Thoroughly: Use dry-run with multiple test cases
- Error Handling: Add error nodes for reliability
- Documentation: Comment complex logic in Code nodes
Testing
- Sample Data: Create realistic test data files
- Edge Cases: Test boundary conditions and errors
- Incremental: Test each node addition
- Regression: Retest after changes
- Production-like: Use staging environment that mirrors production
Deployment
- Inactive First: Deploy workflows in inactive state
- Gradual Rollout: Test with limited traffic initially
- Monitor Closely: Watch first executions carefully
- Quick Rollback: Be ready to deactivate if issues arise
- Document Changes: Keep changelog of modifications
Optimization
- Baseline Metrics: Capture performance before changes
- One Change at a Time: Isolate optimization impacts
- Measure Results: Compare before/after metrics
- Regular Reviews: Schedule monthly optimization reviews
- Cost Awareness: Monitor API usage and execution costs
Maintenance
- Health Checks: Weekly execution statistics review
- Error Analysis: Investigate failure patterns
- Performance Monitoring: Track execution times
- Credential Rotation: Update credentials regularly
- Cleanup: Archive or delete unused workflows
Troubleshooting
Authentication Error
Error: N8N_API_KEY not found in environment
Solution: Set environment variable:
export N8N_API_KEY="your-api-key"
Connection Error
Error: HTTP 401: Unauthorized
Solution:
- Verify API key is correct
- Check N8N_BASE_URL is set correctly
- Confirm API access is enabled in n8n
Validation Errors
Validation failed: Node missing 'name' field
Solution: Check the workflow JSON structure and ensure all required fields are present
Execution Timeout
Status: timeout - Execution did not complete
Solution:
- Check workflow for infinite loops
- Reduce dataset size for testing
- Optimize expensive operations
- Set execution timeout in workflow settings
Rate Limiting
Error: HTTP 429: Too Many Requests
Solution:
- Add Wait nodes between API calls
- Implement exponential backoff
- Use batch processing
- Check API rate limits
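When the scripts themselves hit n8n's or an upstream API's rate limits, a client-side retry with exponential backoff is a common workaround. A generic sketch, not part of the bundled scripts, assuming the requests package is installed:
import time
import requests

def request_with_backoff(url, max_retries=5, base_delay=1.0, **kwargs):
    """Retry a GET request, backing off exponentially on HTTP 429 responses."""
    response = None
    for attempt in range(max_retries):
        response = requests.get(url, **kwargs)
        if response.status_code != 429:
            return response
        # Prefer the server's Retry-After hint; otherwise double the delay each attempt.
        delay = float(response.headers.get("Retry-After", base_delay * (2 ** attempt)))
        time.sleep(delay)
    return response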
Missing Credentials
Warning: Node 'HTTP_Request' may require credentials
Solution:
- Configure credentials in n8n UI
- Assign credentials to node
- Test connection before activating
File Structure
~/clawd/skills/n8n/
├── SKILL.md                 # This file
├── scripts/
│   ├── n8n_api.py           # Core API client (extended)
│   ├── n8n_tester.py        # Testing & validation
│   └── n8n_optimizer.py     # Performance optimization
└── references/
    └── api.md               # n8n API reference
API Reference
For detailed n8n REST API documentation, see references/api.md or visit: https://docs.n8n.io/api/
Support
Documentation:
- n8n Official Docs: https://docs.n8n.io
- n8n Community Forum: https://community.n8n.io
- n8n API Reference: https://docs.n8n.io/api/
Debugging:
- Use validation: python3 scripts/n8n_tester.py validate --id <workflow-id>
- Check execution logs: python3 scripts/n8n_api.py get-execution --id <execution-id>
- Review optimization report: python3 scripts/n8n_optimizer.py report --id <workflow-id>
- Test with dry-run: python3 scripts/n8n_tester.py dry-run --id <workflow-id> --data-file test.json