How I automated documentation updates with Cursor and GitHub Actions
Documentation almost always drifts. You ship a new feature, the underlying code changes, but the text explaining it just sits there, frozen in time.
I ran into this exact issue recently while optimizing my blog for AI search. That project meant adding two new index files to my repository: `public/llms.txt` and `public/llms-full.txt`. They act as lightweight, machine-readable directories for the site. But because they live right in the repo alongside the code, they go stale just as fast as any other documentation file.
Every time I published a new post, I’d inevitably forget to update them. Eventually, I got tired of the chore and decided to automate the whole thing using Cursor Agent and GitHub Actions.
The problem
Creating the initial version of those files was actually pretty straightforward. The real headache was keeping them accurate over time.
Whenever I wrote a new post, my mental checklist looked something like this:
- skim through the new article again
- write a punchy, one-sentence entry for `llms.txt`
- draft a more detailed summary for `llms-full.txt`
- double-check that the titles, slugs, and descriptions perfectly match the actual content
It’s tedious, repetitive work. But honestly, that’s exactly where an AI agent shines—as long as you keep the prompt tightly scoped.
For context, here’s the specific format I needed to maintain:
```markdown
### AI & Developer Tools
- [Post title](https://example.com/blog/post-slug): One-sentence description.
```

The workflow
I double-checked this setup on March 14, 2026, making sure the workflow still uses the exact install path that works reliably in GitHub Actions. The important prompt change was scoping the agent to the post files changed in the current run instead of letting it reread the whole blog and normalize unrelated entries.
```yaml
name: Update AI Index Files

on:
  push:
    branches:
      - master
    paths:
      - 'src/content/posts/**/*.md'
  workflow_dispatch:

permissions:
  contents: write

jobs:
  update-ai-index:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Install Cursor CLI
        run: |
          curl https://cursor.com/install -fsS | bash
          echo "$HOME/.cursor/bin" >> $GITHUB_PATH

      - name: Configure git
        run: |
          git config user.name "Cursor Agent"
          git config user.email "cursoragent@cursor.com"

      - name: Update AI index files
        env:
          MODEL: sonnet-4.5
          CURSOR_API_KEY: ${{ secrets.CURSOR_API_KEY }}
        run: |
          agent -p "You are updating AI index files for a blog.

          # Context
          A blog post was added, modified, or deleted in src/content/posts/.

          # Files to Update
          1. public/llms.txt - Concise AI index with post titles and descriptions
          2. public/llms-full.txt - Comprehensive AI index with full article summaries

          # Scope
          First identify which post files changed in this run.
          - For push events, use the git range ${{ github.event.before }}..${{ github.sha }}
          - If ${{ github.event.before }} is empty or all zeros, use HEAD^..HEAD
          - Only inspect changed markdown files inside src/content/posts/
          - Do not read all posts unless you absolutely need a single adjacent entry to place a changed item in the correct sorted position

          # Instructions
          1. Identify the changed post files for this run from git diff.
          2. Read only:
             - the changed post files that still exist
             - public/llms.txt
             - public/llms-full.txt
          3. Use each changed post's frontmatter as the source of truth for title, description, category, tags, slug, and published date (pubDate).
          4. Update only the matching entries or article blocks for the changed posts:
             - Add entries for newly added posts
             - Remove entries for deleted posts
             - Update entries only when the changed post's title, description, slug, category, tags, pubDate, or article meaning changed materially
          5. If a changed post does not require an AI index update, make no edit for that post.
          6. Preserve every unrelated entry and article block exactly as written. Do not normalize wording, headings, bullet style, or formatting across existing posts.
          7. Keep the author info, navigation, and site information sections exactly unchanged.
          8. For existing posts, edit in place rather than regenerating nearby content.
          9. In public/llms.txt, move only the affected entry if a changed post's category or pubDate requires a different sorted position.
          10. In public/llms-full.txt, move only the affected article block if a changed post's pubDate requires a different sorted position.
          11. Maintain the current format and structure already used in each file.
          12. Only modify public/llms.txt and public/llms-full.txt.
          13. Do not modify any other files.

          # Format for llms.txt entries
          - [Post Title](https://theodoroskokosioulis.com/blog/slug): Brief description.

          # Format for llms-full.txt sections
          - For existing posts, preserve the current section structure of that post's block.
          - For brand new posts, use:
            ## Article: Post Title
            **URL:** https://theodoroskokosioulis.com/blog/slug
            **Published:** Month Day, Year from the post frontmatter pubDate
            **Category:** Category Name
            **Tags:** Tag1, Tag2
            ### Summary
            Description from frontmatter.
            ### Key Points
            - Extract 3-5 main takeaways from the article content

          # Important
          - Do not rewrite unrelated posts just to make the file more uniform
          - Do not touch a post block or list item if that post did not change in this run
          - Preserve the author info, site information, and navigation sections exactly
          " --force --model "$MODEL" --output-format=text

      - name: Check for changes
        id: git-check
        run: |
          git add public/llms.txt public/llms-full.txt
          if git diff --staged --quiet; then
            echo "changes=false" >> $GITHUB_OUTPUT
          else
            echo "changes=true" >> $GITHUB_OUTPUT
          fi

      - name: Commit and push changes
        if: steps.git-check.outputs.changes == 'true'
        run: |
          git commit -m "chore: update AI index files for blog changes"
          git push
```

How it works
The workflow kicks off in two specific scenarios:
- Whenever I push changes to `master` that affect files inside `src/content/posts/`.
- If I manually trigger it straight from the GitHub Actions UI.
Once it starts, the agent first identifies the post files changed in the current git range. It reads only those posts plus the two index files, using the frontmatter as the absolute source of truth. It then updates only the matching list items or article blocks. If everything is already up to date, the workflow just quietly exits without cluttering the history with an empty commit.
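The range rule the prompt gives the agent can be sketched as a small shell helper; the function name `resolve_range` is mine for illustration, not something the workflow defines:

```shell
# resolve_range: choose the diff range the agent should inspect.
#   $1 = the value of github.event.before, $2 = the value of github.sha
resolve_range() {
  local before="$1" sha="$2"
  # On the first push to a branch, github.event.before is empty or all zeros,
  # so fall back to diffing the latest commit against its parent.
  if [ -z "$before" ] || [ "$before" = "0000000000000000000000000000000000000000" ]; then
    echo "HEAD^..HEAD"
  else
    echo "${before}..${sha}"
  fi
}

resolve_range "0000000000000000000000000000000000000000" "abc1234"  # -> HEAD^..HEAD
resolve_range "def5678" "abc1234"                                   # -> def5678..abc1234
```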
Setup
You really only need to configure one secret in GitHub Actions for this to work: `CURSOR_API_KEY`.
Just drop that into your repository settings under Actions secrets. Once that’s done, the workflow runs entirely on its own—no extra manual steps required.
Key details
Keep the prompt narrow and scoped. This whole setup works reliably because the task is incredibly explicit: identify changed posts, update only their matching entries, preserve the existing structure, and touch absolutely nothing else.
Use path filters and git diff scope. By restricting the workflow to only run when post files actually change, and narrowing the agent to the files changed in the current run, you keep the automation focused and avoid broad rewrites.
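To make that diff scope concrete, here is a self-contained sketch in a throwaway repo (the file names are made up for the demo). Only changed markdown under `src/content/posts/` survives the pathspec filter, even though another file changed in the same commit:

```shell
set -eu

# Throwaway repo to demonstrate path-filtered diffs
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "demo"

mkdir -p src/content/posts public
echo "old post" > src/content/posts/first-post.md
echo "index" > public/llms.txt
git add -A && git commit -qm "initial"
before=$(git rev-parse HEAD)

echo "new post" > src/content/posts/second-post.md
echo "tweak" >> public/llms.txt   # changed too, but outside the post pathspec
git add -A && git commit -qm "add post"
sha=$(git rev-parse HEAD)

# Only markdown files under src/content/posts/ pass the filter
changed=$(git diff --name-only "$before..$sha" -- 'src/content/posts/*.md')
echo "$changed"   # -> src/content/posts/second-post.md
```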
Check for changes before committing. Adding that `git diff --staged --quiet` step is a lifesaver for keeping your git history clean and avoiding useless empty commits.
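The guard is just the exit status of `git diff --staged --quiet`. A minimal standalone sketch of the same gate, exercised in a throwaway repo (function name and file names are mine, for demonstration):

```shell
set -eu

# detect_changes: print the same key=value flag the workflow writes to GITHUB_OUTPUT
detect_changes() {
  if git diff --staged --quiet; then
    echo "changes=false"
  else
    echo "changes=true"
  fi
}

# Throwaway repo to exercise both branches of the guard
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "demo"
git commit -q --allow-empty -m "init"

detect_changes            # nothing staged yet -> changes=false
echo "entry" > llms.txt
git add llms.txt
detect_changes            # staged edit -> changes=true
```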
Treat model names as operational detail, not article content. Right now, this workflow uses `sonnet-4.5` simply because that’s the model currently working best in my repo. If you’re copying this over to your own project, you’ll probably want to double-check which models are supported and adjust accordingly.
Where this pattern fits
You can adapt this approach for pretty much any situation where you need to maintain text that’s derived from other source material. A few examples that come to mind:
- changelog entries that summarize a batch of recent commits
- dependency tables listing exactly what’s currently installed
- API documentation generated directly from code comments
- internal indexes tracking a specific, constrained set of files
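When you adapt the pattern, usually only the trigger paths and the prompt change. As a hedged sketch, a changelog variant of the trigger might look like this (the paths are hypothetical and depend on your repo layout):

```yaml
on:
  push:
    branches:
      - master
    paths:
      # point the trigger at whatever source material the derived text summarizes
      - 'src/**'
      - 'package.json'
  workflow_dispatch:
```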
The underlying idea is always the same. If a job is specific enough to describe clearly, but repetitive enough that you hate doing it manually, it’s a perfect candidate for this kind of automation.
The result
These days, whenever I publish or edit a post, those repo-maintained indexes stay perfectly in sync without me having to lift a finger.
And honestly, that’s the real win here. It’s just one less annoying, repetitive task I have to remember to do—and one less thing to review when I’m trying to ship something.