This project has been set up with uv. You should be able to use
`uv run workshop-{n}/[file].py` to see the code in action.
```
uv run workshop-1/post-generator.py --post
```
Takes the company docs and generates a post using a free model from OpenRouter and posts it on Mastodon. Remove the `--post` flag if you do not want to publish the post.
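Under the hood this is essentially two API calls: ask an OpenRouter-hosted model for a draft, then publish it with Mastodon.py. A minimal sketch of that flow, not the actual post-generator.py; the env var names, doc path, and free model name are illustrative assumptions:

```
# Sketch of the generate-and-post flow (not the actual post-generator.py).
# Assumes OPENROUTER_API_KEY, MASTODON_ACCESS_TOKEN, MASTODON_BASE_URL are set,
# and that the chosen free model is available on OpenRouter.
import os
from openai import OpenAI          # OpenRouter exposes an OpenAI-compatible API
from mastodon import Mastodon      # Mastodon.py client

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

docs = open("business-docs/overview.md").read()  # hypothetical doc path
completion = client.chat.completions.create(
    model="meta-llama/llama-3.1-8b-instruct:free",  # placeholder free model
    messages=[
        {"role": "system", "content": "Write a short social media post for this company."},
        {"role": "user", "content": docs},
    ],
)
post_text = completion.choices[0].message.content

mastodon = Mastodon(
    access_token=os.environ["MASTODON_ACCESS_TOKEN"],
    api_base_url=os.environ["MASTODON_BASE_URL"],
)
mastodon.status_post(post_text)  # only happens when --post is passed in the real script
```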
```
uv run workshop-1/keyword-responder.py --post
```
Searches 3 keywords based on Emanon (the company described in the `business-docs` folder) and uses structured outputs to generate responses to the top 5 posts.
Same as above, you can remove the `--post` flag if you do not want to publish the responses (highly recommended). This is the one part of the project that is untested, because it spams other people's posts.
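"Structured outputs" here means the model is constrained to return JSON matching a schema, which the script validates before replying. A rough sketch with a hypothetical Pydantic schema; the field names and JSON-only prompt are assumptions, not the real keyword-responder.py internals:

```
# Sketch of structured output parsing with a hypothetical response schema.
import json
import os
from openai import OpenAI
from pydantic import BaseModel

class KeywordReply(BaseModel):   # illustrative schema, not the actual one
    reply_text: str
    relevant: bool

client = OpenAI(base_url="https://openrouter.ai/api/v1",
                api_key=os.environ["OPENROUTER_API_KEY"])

raw = client.chat.completions.create(
    model="meta-llama/llama-3.1-8b-instruct:free",   # placeholder free model
    messages=[
        {"role": "system",
         "content": 'Reply ONLY with JSON: {"reply_text": string, "relevant": boolean}'},
        {"role": "user", "content": "Post found for keyword 'AI consulting': ..."},
    ],
).choices[0].message.content

# Validate the model's JSON against the schema before doing anything with it.
reply = KeywordReply.model_validate(json.loads(raw))
if reply.relevant:
    print(reply.reply_text)      # would be posted as a Mastodon reply with --post
```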
See the `workshop-2-hitloop/` folder for the Telegram-based human approval workflow.
Use chatIDrobot to get your chat ID.
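The approval loop boils down to: send the draft to your Telegram chat, then poll for a yes/no before publishing. A minimal sketch against the raw Telegram Bot API; it assumes TELEGRAM_BOT_TOKEN and TELEGRAM_CHAT_ID env vars, and the actual workshop code may use a bot library or inline buttons instead:

```
# Sketch of a human-in-the-loop approval step via the Telegram Bot API.
# Assumes TELEGRAM_BOT_TOKEN and TELEGRAM_CHAT_ID are set in the environment.
import os
import time
import requests

TOKEN = os.environ["TELEGRAM_BOT_TOKEN"]
CHAT_ID = os.environ["TELEGRAM_CHAT_ID"]     # obtained via chatIDrobot
API = f"https://api.telegram.org/bot{TOKEN}"

def ask_for_approval(draft: str, timeout: int = 300) -> bool:
    # Send the draft and ask the human to reply yes/no.
    requests.post(f"{API}/sendMessage",
                  json={"chat_id": CHAT_ID,
                        "text": f"Approve this post? Reply yes/no:\n\n{draft}"})
    deadline = time.time() + timeout
    offset = None
    while time.time() < deadline:
        # Long-poll for new messages and look for a yes/no answer.
        updates = requests.get(f"{API}/getUpdates",
                               params={"offset": offset, "timeout": 30}).json()
        for update in updates.get("result", []):
            offset = update["update_id"] + 1
            text = update.get("message", {}).get("text", "").strip().lower()
            if text in ("yes", "no"):
                return text == "yes"
    return False  # no answer in time -> do not post
```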
Workshop 3 focuses on backend and cloud deployment using Google Cloud Platform (GCP) and Claude Code. It builds on Workshop 1 by adding SQLite database tracking and a FastAPI server.
- GCloud Account (cloud.google.com)
- Billing Setup - $300 Free Credits
- GCloud CLI installed (`brew install google-cloud-sdk` on Mac)
```
uv run workshop-3/api.py
```
Runs a FastAPI server that provides REST endpoints to view posts, responses, and statistics stored in SQLite.
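Conceptually each endpoint is a thin SELECT over the SQLite file. A minimal sketch of what one read endpoint could look like; the table and column names are illustrative, not necessarily those in workshop-3/api.py:

```
# Sketch of a read-only FastAPI endpoint over SQLite (illustrative schema).
import sqlite3
from fastapi import FastAPI

app = FastAPI()
DB_PATH = "social_media.db"   # assumed database filename

@app.get("/posts")
def list_posts():
    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row
    rows = conn.execute(
        "SELECT id, content, created_at FROM posts ORDER BY created_at DESC"
    ).fetchall()
    conn.close()
    return [dict(r) for r in rows]
```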
```
uv run workshop-3/post_generator_db.py --post
```
Extends workshop-1's post generator - now saves posts to the SQLite database.
```
uv run workshop-3/keyword_responder_db.py --post
```
Extends workshop-1's keyword responder - now saves responses to the SQLite database.
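"Saves to the SQLite database" just means each generated item is inserted into a table, whether or not it was published. A rough sketch with an assumed posts table:

```
# Sketch of persisting a generated post to SQLite (illustrative schema).
import sqlite3

def save_post(db_path: str, content: str, posted: bool) -> None:
    conn = sqlite3.connect(db_path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS posts (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            content TEXT NOT NULL,
            posted INTEGER NOT NULL,
            created_at TEXT DEFAULT CURRENT_TIMESTAMP
        )
    """)
    conn.execute("INSERT INTO posts (content, posted) VALUES (?, ?)",
                 (content, int(posted)))
    conn.commit()
    conn.close()
```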
```
uv run workshop-3/post_generator_hitl_db.py --approve --post
```
Complete workflow: generates post → sends to Telegram for approval → posts to Mastodon if approved → saves to database.
See `workshop-3/prompts.md` for full details. Key prompts:

- "which gcloud account am I logged into and which projects does it have?"
- "can we deploy a e2 vm to that project? Please turn on any necessary apis"
- "we have this virtual machine in gcloud. you have the gcloud cli to ssh into that machine. We need to install sqlite"
- "let's also deploy a fastapi server that uses that database and make sure the fast api is setup properly as a service on linux"
Workshop 4 adds RAG-powered document monitoring with a unified FastAPI backend. All functionality is exposed via REST endpoints that a frontend can control.
- Document Watcher: Monitors `business-docs/` for changes and auto-generates posts
- RAG Search: Hybrid BM25 + semantic search over all content (see the sketch after this list)
- Comment Listener: Auto-replies to Mastodon comments using RAG context
- Local Embeddings: Uses MiniLM-L6-v2 via ONNX (no API calls needed)
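A rough sketch of how such a hybrid score can be computed, assuming BM25 hits come from a SQLite FTS5 table named `docs` and embeddings from fastembed's MiniLM model; the table name, candidate count, and weighting are assumptions, and rag.py may differ:

```
# Sketch of hybrid scoring: weighted mix of BM25 rank and cosine similarity.
# Assumes an FTS5 table `docs(content)` exists and fastembed is installed.
import sqlite3
import numpy as np
from fastembed import TextEmbedding

model = TextEmbedding("sentence-transformers/all-MiniLM-L6-v2")

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def hybrid_search(db_path: str, query: str, top_k: int = 5, alpha: float = 0.5):
    conn = sqlite3.connect(db_path)
    # BM25 side: SQLite's bm25() returns smaller values for better matches.
    rows = conn.execute(
        "SELECT rowid, content, bm25(docs) FROM docs WHERE docs MATCH ? "
        "ORDER BY bm25(docs) LIMIT 50", (query,)
    ).fetchall()
    conn.close()
    if not rows:
        return []
    q_vec = next(model.embed([query]))
    bm25_scores = [-r[2] for r in rows]          # negate so higher is better
    lo, hi = min(bm25_scores), max(bm25_scores)
    scored = []
    for (rowid, content, _), bm in zip(rows, bm25_scores):
        bm_norm = (bm - lo) / (hi - lo) if hi > lo else 1.0
        d_vec = next(model.embed([content]))
        score = alpha * bm_norm + (1 - alpha) * cosine(q_vec, d_vec)
        scored.append((score, rowid, content))
    return sorted(scored, reverse=True)[:top_k]
```

Re-embedding each candidate at query time is only for brevity here; the actual pipeline stores embeddings up front (see the `/embeddings/*` endpoints below).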
```
cd workshop-4
uv run uvicorn api:app --reload
```
Visit http://localhost:8000/docs for the Swagger UI with all endpoints.
| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Health check |
| `/stats` | GET | Posts, responses, embeddings, comments stats |
| `/posts` | GET | List all generated posts |
| `/responses` | GET | List all keyword responses |
| `/embeddings/init` | POST | Initialize all embeddings (business docs, posts, responses) |
| `/embeddings/refresh` | POST | Refresh only changed embeddings |
| `/embeddings/stats` | GET | Embedding counts by type |
| `/search` | POST | Hybrid RAG search with configurable weights |
| `/watcher/start` | POST | Start document watcher (background) |
| `/watcher/stop` | POST | Stop document watcher |
| `/watcher/status` | GET | Watcher state and stats |
| `/watcher/check` | POST | One-time check for doc changes |
| `/watcher/reset` | POST | Reset document tracking state |
| `/comments/start` | POST | Start comment listener (background) |
| `/comments/stop` | POST | Stop comment listener |
| `/comments/status` | GET | Comment listener state |
| `/comments/replies` | GET | All generated comment replies |
```
curl -X POST http://localhost:8000/search \
  -H "Content-Type: application/json" \
  -d '{"query": "AI consulting services", "top_k": 5}'
```

- Create a systemd service on your VM:
```
sudo nano /etc/systemd/system/social-media-api.service
```

```
[Unit]
Description=Social Media Agent API
After=network.target
[Service]
Type=simple
User=your-username
WorkingDirectory=/home/your-username/6s093-social-media-agent/workshop-4
Environment="PATH=/home/your-username/6s093-social-media-agent/.venv/bin"
EnvironmentFile=/home/your-username/6s093-social-media-agent/.env
ExecStart=/home/your-username/6s093-social-media-agent/.venv/bin/uvicorn api:app --host 0.0.0.0 --port 8000
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
```

- Enable and start:

```
sudo systemctl daemon-reload
sudo systemctl enable social-media-api
sudo systemctl start social-media-api
sudo systemctl status social-media-api
```

The individual modules can still be run directly:

```
cd workshop-4
# Doc watcher (event-driven)
uv run python doc_watcher.py --watch --post
# Initialize embeddings
uv run python embeddings.py --init
# RAG search test
uv run python rag.py "AI consulting"
# Comment listener
uv run python comment_listener.py --post
```

```
workshop-4/
├── api.py               # FastAPI backend (main entry point)
├── database.py          # SQLite with FTS5 for BM25 search
├── embeddings.py        # Local MiniLM-L6-v2 via fastembed
├── rag.py               # Hybrid search (BM25 + cosine similarity)
├── doc_watcher.py       # File monitoring with watchdog
├── comment_listener.py  # Mastodon comment auto-replies
└── social_media.db      # SQLite database
```
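doc_watcher.py leans on the watchdog package for event-driven monitoring. A minimal sketch of that pattern over `business-docs/`; the handler body is a placeholder for the real regenerate-and-post step:

```
# Sketch of event-driven monitoring of business-docs/ with watchdog.
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class DocChangeHandler(FileSystemEventHandler):
    def on_modified(self, event):
        if event.is_directory:
            return
        # Placeholder: the real watcher would regenerate and queue a post here.
        print(f"Changed: {event.src_path}")

observer = Observer()
observer.schedule(DocChangeHandler(), path="business-docs", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()
```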
Please put the content for your workshop in a folder called workshop-[your workshop number].
Please build upon the previous workshop's code so that we have one cohesive project by the last workshop.
I have also included reference docs for Emanon (a renamed version of Vector Lab) in the business-docs folder of this repo, which you can use.
Also, please READ and TEST your code so that you know what it is doing and that it actually works. This will be used as ground truth for the other TAs, so any issues could derail things very quickly.
I have also kept a file highlighting ALL the prompts I used to get to my code, for reference; I encourage you to do the same.
The expected code for each day's workshop can be found in its respective folder.
Each folder only contains the code needed for that workshop and does not include the previous day's code, since the students' code will most likely look very different.
They are there so you can check what working function calls and env setup for the different services look like, so you can diagnose issues in a student's implementation.