# Skill Evaluator
Evaluate skills across 25 criteria using a hybrid automated + manual approach.
## Quick Start
### 1. Run automated checks

```bash
python3 scripts/eval-skill.py /path/to/skill
python3 scripts/eval-skill.py /path/to/skill --json      # machine-readable
python3 scripts/eval-skill.py /path/to/skill --verbose   # show all details
```
Checks: file structure, frontmatter, description quality, script syntax, dependency audit, credential scan, env var documentation.
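For orientation, here is a minimal sketch of the kind of structural check the script performs. This is hypothetical illustration code, not the actual `eval-skill.py` implementation; it pattern-matches the frontmatter with the standard library instead of using PyYAML:

```python
import re
from pathlib import Path

def check_frontmatter(skill_dir):
    """Return a list of problems with a skill's SKILL.md frontmatter.

    Hypothetical sketch of one automated structural check: the file must
    exist, open with a YAML frontmatter block, and declare a description.
    """
    skill_md = Path(skill_dir) / "SKILL.md"
    if not skill_md.exists():
        return ["SKILL.md is missing"]
    text = skill_md.read_text(encoding="utf-8")
    # Frontmatter is the block between the leading pair of --- lines.
    match = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if not match:
        return ["no YAML frontmatter block at top of SKILL.md"]
    if not re.search(r"^description:\s*\S+", match.group(1), re.MULTILINE):
        return ["frontmatter lacks a description field"]
    return []
```

The real script layers the remaining checks (script syntax, dependency audit, credential scan) on top of this same pattern: each check returns findings, and an empty list means a pass.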
### 2. Manual assessment
Use the rubric at `references/rubric.md` to score 25 criteria across 8 categories (0–4 each, 100 total). Each criterion has concrete descriptions per score level.
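The arithmetic is simple: 25 criteria × 4 points max = 100. A tiny hypothetical helper for totalling a completed score sheet while catching entry mistakes:

```python
def total_score(scores):
    """Sum a rubric score sheet: 25 criteria, each scored 0-4, max 100."""
    if len(scores) != 25:
        raise ValueError("expected exactly 25 criterion scores")
    if any(not (0 <= s <= 4) for s in scores):
        raise ValueError("each criterion is scored 0-4")
    return sum(scores)
```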
### 3. Write the evaluation
Copy `assets/EVAL-TEMPLATE.md` into the skill directory as `EVAL.md`, then fill in the automated results and manual scores.
## Evaluation Process
- Run `eval-skill.py` → get the automated structural score
- Read the skill's SKILL.md → understand what it does
- Read or skim the scripts → assess code quality, error handling, testability
- Score each manual criterion using `references/rubric.md` → concrete criteria per level
- Prioritize findings as P0 (blocks publishing) / P1 (should fix) / P2 (nice to have)
- Write EVAL.md in the skill directory with scores + findings
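The P0/P1/P2 labels in the steps above sort naturally, which is convenient when assembling the findings section of EVAL.md. A sketch with made-up findings:

```python
# Hypothetical findings; the (priority, issue) tuples sort so that
# P0 publishing blockers surface first.
findings = [
    ("P2", "SKILL.md could include a usage example"),
    ("P0", "credential committed in scripts/deploy.py"),
    ("P1", "no error handling around subprocess calls"),
]
for priority, issue in sorted(findings):
    print(f"{priority}: {issue}")
```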
## Categories (8 categories, 25 criteria)
| # | Category | Source Framework | Criteria |
|---|---|---|---|
| 1 | Functional Suitability | ISO 25010 | Completeness, Correctness, Appropriateness |
| 2 | Reliability | ISO 25010 | Fault Tolerance, Error Reporting, Recoverability |
| 3 | Performance / Context | ISO 25010 + Agent | Token Cost, Execution Efficiency |
| 4 | Usability – AI Agent | Shneiderman, Gerhardt-Powals | Learnability, Consistency, Feedback, Error Prevention |
| 5 | Usability – Human | Tognazzini, Norman | Discoverability, Forgiveness |
| 6 | Security | ISO 25010 + OpenSSF | Credentials, Input Validation, Data Safety |
| 7 | Maintainability | ISO 25010 | Modularity, Modifiability, Testability |
| 8 | Agent-Specific | Novel | Trigger Precision, Progressive Disclosure, Composability, Idempotency, Escape Hatches |
## Interpreting Scores
| Range | Verdict | Action |
|---|---|---|
| 90–100 | Excellent | Publish confidently |
| 80–89 | Good | Publishable, note known issues |
| 70–79 | Acceptable | Fix P0s before publishing |
| 60–69 | Needs Work | Fix P0 + P1 before publishing |
| <60 | Not Ready | Significant rework needed |
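The bands above can equivalently be expressed as a small lookup (a hypothetical helper mirroring the table, not part of `eval-skill.py`):

```python
def verdict(score):
    """Map a 0-100 rubric total to the verdict band from the table."""
    if score >= 90:
        return "Excellent"
    if score >= 80:
        return "Good"
    if score >= 70:
        return "Acceptable"
    if score >= 60:
        return "Needs Work"
    return "Not Ready"
```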
## Deeper Security Scanning
This evaluator covers security basics (credentials, input validation, data safety). For a thorough security audit of a skill under development, consider SkillLens (`npx skilllens scan <path>`). It checks for exfiltration, code execution, persistence, privilege bypass, and prompt injection – complementary to the quality focus here.
## Dependencies
- Python 3.6+ (for `eval-skill.py`)
- PyYAML (`pip install pyyaml`) – for frontmatter parsing in automated checks