# x-timeline-digest

## Overview
This skill uses `bird` to read X/Twitter timelines and build a high-signal digest.

Sources:

- For You timeline
- Following timeline

What it does:
- Fetch recent tweets
- Filter incrementally (avoid reprocessing)
- Deduplicate (ID + near-duplicate text)
- Rank and trim
- Generate a Chinese digest
- Output a structured payload
Delivery (Telegram, email, etc.) is NOT handled here. Upstream OpenClaw workflows decide how to notify users.
## Configuration

All config is read from `skills.entries["x-timeline-digest"].config`.

### Config fields
| Name | Type | Default | Description |
|---|---|---|---|
| intervalHours | number | 6 | Interval window in hours |
| fetchLimitForYou | number | 100 | Tweets fetched from For You |
| fetchLimitFollowing | number | 60 | Tweets fetched from Following |
| maxItemsPerDigest | number | 25 | Max tweets in one digest |
| similarityThreshold | number | 0.9 | Near-duplicate similarity threshold |
| statePath | string | ~/.openclaw/state/x-timeline-digest.json | State file path |
## Dependencies

- `bird` must be installed and available in `PATH`
- `bird` must already be authenticated (cookie login)
- Read-only usage
## Usage

### 1. Basic (Raw JSON)

Run the digest generator to get a clean, deduplicated JSON payload:

    node skills/x-timeline-digest/digest.js
### 2. Intelligent Digest (Recommended)

To generate the "Smart Brief" (categorized, summarized, denoised):

- Run the script:

      node skills/x-timeline-digest/digest.js > digest.json

- Read the prompt template:

      read skills/x-timeline-digest/PROMPT.md

- Send the prompt to your LLM, injecting the content of `digest.json` where `{{JSON_DATA}}` is.

Note: The script automatically applies heuristic filtering (removes "gm", ads, short spam) before outputting JSON.
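The heuristic filter could look like the sketch below. The exact rules and keyword lists inside digest.js are not specified here, so the patterns are illustrative assumptions, not the script's actual implementation:

```javascript
// Illustrative noise patterns; the real filter in digest.js may differ.
const NOISE_PATTERNS = [
  /^\s*gm\b/i,          // "gm" greeting posts
  /#ad\b/i,             // disclosed ads
  /airdrop|giveaway/i,  // common promo spam
];

function isNoise(text) {
  const t = text.trim();
  if (t.length < 10) return true; // short spam
  return NOISE_PATTERNS.some((re) => re.test(t));
}

// Drop tweets whose text matches any noise rule.
function heuristicFilter(tweets) {
  return tweets.filter((tw) => !isNoise(tw.text));
}
```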
## Bird Commands Used

For You timeline:

    bird home -n --json

Following timeline:

    bird home --following -n --json
## State Management

State is persisted to `statePath`.

### State structure

    {
      "lastRunAt": "2026-02-01T00:00:00+08:00",
      "sentTweetIds": {
        "123456789": "2026-02-01T00:00:00+08:00"
      }
    }
### Rules

- Tweets already in `sentTweetIds` must not be included again
- After a successful run:
  - Update `lastRunAt`
  - Add pushed tweet IDs to `sentTweetIds`
  - Keep IDs for at least 30 days
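The post-run update and 30-day pruning can be sketched as follows, assuming the state shape shown above (the `updateState` helper name is illustrative):

```javascript
const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

// Record newly pushed IDs, prune entries older than 30 days,
// and stamp lastRunAt -- matching the rules above.
function updateState(state, pushedIds, now = new Date()) {
  const iso = now.toISOString();
  const sent = { ...state.sentTweetIds };
  for (const id of pushedIds) sent[id] = iso;
  for (const [id, ts] of Object.entries(sent)) {
    if (now - new Date(ts) > THIRTY_DAYS_MS) delete sent[id];
  }
  return { lastRunAt: iso, sentTweetIds: sent };
}
```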
## Processing Pipeline

- Fetch from For You and Following
- Incremental filter using `lastRunAt`
- Hard deduplication by tweet ID
- Near-duplicate merge using text similarity
- Rank and trim to `maxItemsPerDigest`
- Summarize into a Chinese digest
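One way to implement the near-duplicate merge step is token-level Jaccard similarity compared against `similarityThreshold`. This document does not specify the metric digest.js actually uses, so treat this as an illustrative sketch:

```javascript
// Lowercased word tokens, punctuation stripped.
function tokens(text) {
  return new Set(text.toLowerCase().split(/\W+/).filter(Boolean));
}

// Jaccard similarity: |A ∩ B| / |A ∪ B|, in [0, 1].
function jaccard(a, b) {
  const A = tokens(a), B = tokens(b);
  let inter = 0;
  for (const t of A) if (B.has(t)) inter++;
  const union = A.size + B.size - inter;
  return union === 0 ? 0 : inter / union;
}

// Keep a tweet only if it is not too similar to one already kept.
function mergeNearDuplicates(tweets, threshold = 0.9) {
  const kept = [];
  for (const tw of tweets) {
    if (!kept.some((k) => jaccard(k.text, tw.text) >= threshold)) kept.push(tw);
  }
  return kept;
}
```

With the default threshold of 0.9, only nearly word-identical tweets are merged; retweets with small edits still collapse, while paraphrases survive.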
## Output

The skill returns one JSON object:

    {
      "window": {
        "start": "2026-02-01T00:00:00+08:00",
        "end": "2026-02-01T06:00:00+08:00",
        "intervalHours": 6
      },
      "counts": {
        "forYouFetched": 100,
        "followingFetched": 60,
        "afterIncremental": 34,
        "afterDedup": 26,
        "final": 20
      },
      "digestText": "中文摘要内容",
      "items": [
        {
          "id": "123456",
          "author": "@handle",
          "createdAt": "2026-02-01T02:15:00+08:00",
          "text": "tweet text",
          "url": "https://x.com/handle/status/123456",
          "sources": ["following"]
        }
      ]
    }