This repository contains my evolving Windsurf rules with automatic version tracking. The README dynamically displays all versions from newest to oldest.
My Windsurf rules have to keep evolving because the tool itself evolves so quickly. This repository tracks every version of the rules, with the latest always available in `latest.md` and historical versions archived for reference.
These rules are designed to be used with Windsurf's memory system to provide consistent context and behavior across sessions.
The main rules file should be placed at: `~/.codeium/windsurf/memories/global_rules.md`
- User: Daniel Rosehill (danielrosehill.com)
- Location: Jerusalem, Israel
- Environment: Kubuntu 25.04 desktop
- Privileges: Full sudo access. Assume permission to invoke.
- LAN Network: 10.0.0.0/24
- SSH: Key-based access to LAN devices is preconfigured
- Development Location: Home (on the LAN)
- External Networks: Will inform when using Cloudflare IPs and Tailscale endpoints
- OS: Kubuntu (Ubuntu + KDE Plasma), Latest release
- Filesystem: BTRFS + RAID5, 5 physical drives in array
- Model: Intel Core i7-12700F
- Cores: 12 cores / 20 threads
- Model: AMD Radeon RX 7700 XT (gfx1101 / Navi 32)
- Driver: `amdgpu`
- ROCm: Installed (important for LLM development and local AI tasks)
- RAM: 64 GB Installed
- Interface: `enp6s0`
- LAN IP: 10.0.0.6
- Model: MSI PRO B760M-A WIFI (MS-7D99)
Use the following IP references unless Daniel indicates he is off the home LAN, in which case assume these are unavailable or use Tailnet alternatives.
| IP Address | Hostname | Description |
| ---------- | ---------------- | ----------------------------------- |
| 10.0.0.1 | `opnsense` | Gateway / Router |
| 10.0.0.2 | `proxmox-host` | Proxmox (Ubuntu VM & HA containers) |
| 10.0.0.3 | `home-assistant` | Home Assistant OS |
| 10.0.0.4 | `home-server` | Ubuntu VM (core services host) |
| 10.0.0.50 | `synology` | Synology NAS (DS920+) |
- Primary Development: Home LAN (10.0.0.0/24)
- External Access: Cloudflare IPs and Tailscale endpoints when off-site
- Local IP: 10.0.0.6 (`enp6s0` interface)
- OpenRouter - Preferred backend for cloud LLM access
- OpenAI - Fallback when OpenRouter adds complexity
- Ollama - Local LLM deployment (Llama 3.2 preferred)
- Hugging Face - Creating copies of datasets
- Wasabi - Cloud storage solution
- Netlify - Web deployment and hosting
- Cloudflare - DNS management and tunneling
- Default to private - Keep repositories private unless there's a specific reason for public access
- Backup-first approach - Strong preference for local backups of all important data
- UV (primary environment manager)
- Regular `venv` (fallback for compatibility issues)
- Local backups are essential - always ensure local copies exist
- Hugging Face for dataset management and sharing
- Wasabi for cloud storage needs
- Netlify for static sites and web applications
- Cloudflare for DNS and traffic management
- GitHub for source control and CI/CD
- Private by default - All repositories should be private unless explicitly needed public
- Local backup strategy - Maintain local copies of critical data and configurations
- Authenticated CLIs - Use properly authenticated tools for secure operations
Daniel frequently works on AI projects with these preferences:
- Primary: OpenRouter (preferred for cloud LLM access)
- Fallback: OpenAI (when OpenRouter adds unnecessary complexity)
- Local: Ollama is installed
- Local Model: Favor Llama 3.2 for general-purpose local tasks
- API keys are on path
- 1Password is available via CLI
- Use 1Password wherever possible to save and read secrets
- Docker is installed and available
- Use Docker to create working prototypes of services
- Create replicable deployment processes for both LAN VMs and remote targets
- LAN VMs: Local development and testing
- Remote: Production deployments
- Focus on creating consistent, reproducible processes
Unless otherwise instructed, assume Daniel will be placing deployed services and tools behind Cloudflare authentication.
- Every environment has a cloudflare container
- Runs a remotely managed network
- Do not attempt to set up or edit it locally
- Container has a network called `cloudflared`
To ensure services can be routed to:
- Add cloudflared as an external network attachment
- Give containers a unique hostname for routing
- Example: `crm:80` for a CRM service on port 80
- Services connect to the `cloudflared` network
- Routing happens via unique hostnames
- Authentication handled by Cloudflare
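As a rough illustration of the pattern above, a compose file might attach a service to the existing `cloudflared` network like this. This is a minimal sketch: the `crm` service name and image are placeholders, not part of the original rules; only the external network name comes from the rules above.

```yaml
# Hypothetical docker-compose.yml sketch for routing a service through the
# remotely managed cloudflared network.
services:
  crm:
    image: example/crm:latest   # placeholder image
    hostname: crm               # unique hostname used for tunnel routing (crm:80)
    networks:
      - cloudflared

networks:
  cloudflared:
    external: true              # pre-existing, remotely managed - do not create or edit
```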
- Primary: Use `uv` to create virtual environments
- Fallback: Switch to regular `venv` if running into package difficulties
- Always activate the environment after creating it
- Verify activation before attempting to run scripts
- First troubleshooting step: Check if virtual environment is active when encountering package availability errors
- Ensure environment is active before running any Python scripts
- If package errors occur, verify environment activation first
- Use uv unless specific compatibility issues arise
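A minimal shell sketch of this environment workflow, assuming a standard `uv`/`venv` setup (the `.venv` path and `requirements.txt` install are illustrative):

```bash
# Primary path: create and activate a uv-managed environment
uv venv .venv
source .venv/bin/activate

# Verify activation before running anything (first troubleshooting step)
which python   # should point at .venv/bin/python

# Fallback if uv hits package difficulties: regular venv
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt   # illustrative dependency install
```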
The following tools are installed and authenticated:
- `gh` - GitHub CLI
- `wrangler` - Cloudflare CLI
- `b2` - Backblaze B2 object storage
- `wasabi` - Wasabi object storage
- `op` - 1Password CLI for secrets management
- `netlify` - Netlify CLI for static site deployment (authenticated)
- Static sites: Deploy through Netlify CLI
- Do not deploy via Windsurf - use dedicated CLIs instead
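For example, a static deploy might look like this (a sketch; the `dist` build directory is an assumption, not part of the original rules):

```bash
netlify status                     # confirm the CLI is authenticated first
netlify deploy --prod --dir=dist   # deploy the built site; dist/ is illustrative
```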
- Use `op` (1Password CLI) for secure secret handling
- API keys are available on path
- Prefer 1Password for saving and reading secrets
Daniel likes to keep organized file repositories.
- Avoid generating many single-purpose scripts
- If you can run commands directly, prefer that approach
- Consolidate related functionality when possible
- Consider initiating repository cleanups during lengthy sessions
- Clean up throughout a project lifecycle
- Maintain organized structure as work progresses
- Default assumption: Private repositories
- Public repos: Don't expose secrets or PII
- Flag any private information encountered in public contexts
- Keep file structure logical and navigable
- Remove unused files and scripts
- Organize related files into appropriate directories
These guidelines cover tool selection when you have overlapping resources to achieve an outcome.
If Daniel prompts something like: "this URL has the API context that we need", you should scrape that content (for example using Firecrawl MCP or similar). Only if that approach proves unfruitful should you move to using headless browsers to attempt to extract the documentation.
Less is more - Only contribute to existing docs or add new docs if they would be helpful. Don't create documentation just for the sake of it.
Unless otherwise instructed, assume these are private repositories and projects. Don't create general purpose README docs unless Daniel explicitly wants that.
Provide notes about what was achieved during a lengthy editing session:
- Date and summary
- What's blocking progress
- What was accomplished
- Next steps
Daniel may import this into a wiki for future reference:
- Use descriptive subfolders like 'instructions'
- Focus on how to use and maintain what you created
- Make it searchable and organized
When it took significant effort to figure out an approach:
- High-level overview of the solution
- Key decisions and rationale
- Implementation patterns used
- Nest docs at `/docs` relative to the repo root
- Use clear subfolder organization
- Make documentation self-contained and useful for future reference
MCPs are stored here: `/home/daniel/.codeium/windsurf/mcp_config.json`
- Windsurf has a limit of 100 active tools
- Be proactive in supporting Daniel to prune unused MCPs
- Quality over quantity - activate tools judiciously
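To support pruning, a quick way to list what's configured might be the following sketch. It assumes the conventional `mcpServers` top-level key in the config file; verify against the actual file before relying on it.

```bash
# List configured MCP servers to spot candidates for pruning
jq '.mcpServers | keys' /home/daniel/.codeium/windsurf/mcp_config.json
```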
When you could achieve a task through multiple methods:
- MCP (preferred)
- SSH into server
- Direct CLI invocation
Always favor using MCPs when available.
- Ask Daniel to clarify the purpose of an MCP if you're unsure
- Don't assume functionality - verify before using
Within every project, you may wish to configure and use a folder structure to receive instructions from Daniel and write them back to him. If this is partially set up, finish it and use it.
Paths are relative to the repo base:
| Path | Purpose |
| --------------------------- | ------------------------------------- |
| `/ai-workspace/for-ai/` | Assistant input (logs, prompts, etc.) |
| `/ai-workspace/for-daniel/` | Assistant output, notes, logs, docs |
- Use `/for-daniel/` for procedures, logs, and internal docs
- Complete partially set up workspace structures
- Follow the established pattern when it exists
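Setting up (or completing) the structure is a one-liner from the repo root, sketched here:

```bash
# Create both workspace folders; no-op for any that already exist
mkdir -p ai-workspace/for-ai ai-workspace/for-daniel
```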
Follow Daniel's instructions closely. You may suggest enhancements but never independently action your ideas unless Daniel approves of them.
- Listen - Understand the specific request
- Suggest - Offer improvements or alternatives if relevant
- Wait - Get approval before implementing suggestions
- Execute - Follow through on approved actions
- Prioritize Daniel's explicit instructions
- Suggestions should enhance, not replace, the requested work
- Always seek approval for independent ideas
- Focus on delivering what was asked for first
If the following files exist in the repo root, treat them as current task definitions:
- `instructions.md`
- `prompt.md`
- `task.md`
- Read and follow these files without asking
- Only ask for clarification if ambiguity exists
- Prioritize these instructions when present
- Check repo root at the start of new projects
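A sketch of the check to run at the start of a project, using the filenames from the list above:

```bash
# Detect task-definition files in the repo root
for f in instructions.md prompt.md task.md; do
  [ -f "$f" ] && echo "Task definition found: $f"
done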
Rules constructed on 2025-08-10 00:56:03 using intelligent LLM organization. Organization strategy: organized content in a logical progression from foundational context to specific operational details, grouping blocks into 4 main sections: Core Context, Technology & Infrastructure, Operational Guidelines, and Workflow Management. This structure ensures the AI first understands who it's working with and their environment, then learns technical preferences, followed by operational rules and specific workflow details.
- User: Daniel Rosehill (danielrosehill.com)
- Location: Jerusalem, Israel
- Environment: Kubuntu 25.04 desktop
- Privileges: Full sudo access. Assume permission to invoke.
- LAN Network: 10.0.0.0/24
- SSH: Key-based access to LAN devices is preconfigured
- Development Location: Home (on the LAN)
- External Networks: Will inform when using Cloudflare IPs and Tailscale endpoints
- Primary: Use `uv` to create virtual environments
- Fallback: Switch to regular `venv` if running into package difficulties
- Always activate the environment after creating it
- Verify activation before attempting to run scripts
- First troubleshooting step: Check if virtual environment is active when encountering package availability errors
- Ensure environment is active before running any Python scripts
- If package errors occur, verify environment activation first
- Use uv unless specific compatibility issues arise
The following tools are installed and authenticated:
- `gh` - GitHub CLI
- `wrangler` - Cloudflare CLI
- `b2` - Backblaze B2 object storage
- `wasabi` - Wasabi object storage
- `op` - 1Password CLI for secrets management
- `netlify` - Netlify CLI for static site deployment (authenticated)
- Static sites: Deploy through Netlify CLI
- Do not deploy via Windsurf - use dedicated CLIs instead
- Use `op` (1Password CLI) for secure secret handling
- API keys are available on path
- Prefer 1Password for saving and reading secrets
Daniel likes to keep organized file repositories.
- Avoid generating many single-purpose scripts
- If you can run commands directly, prefer that approach
- Consolidate related functionality when possible
- Consider initiating repository cleanups during lengthy sessions
- Clean up throughout a project lifecycle
- Maintain organized structure as work progresses
- Default assumption: Private repositories
- Public repos: Don't expose secrets or PII
- Flag any private information encountered in public contexts
- Keep file structure logical and navigable
- Remove unused files and scripts
- Organize related files into appropriate directories
Less is more - Only contribute to existing docs or add new docs if they would be helpful. Don't create documentation just for the sake of it.
Unless otherwise instructed, assume these are private repositories and projects. Don't create general purpose README docs unless Daniel explicitly wants that.
Provide notes about what was achieved during a lengthy editing session:
- Date and summary
- What's blocking progress
- What was accomplished
- Next steps
Daniel may import this into a wiki for future reference:
- Use descriptive subfolders like 'instructions'
- Focus on how to use and maintain what you created
- Make it searchable and organized
When it took significant effort to figure out an approach:
- High-level overview of the solution
- Key decisions and rationale
- Implementation patterns used
- Nest docs at `/docs` relative to the repo root
- Use clear subfolder organization
- Make documentation self-contained and useful for future reference
- Docker is installed and available
- Use Docker to create working prototypes of services
- Create replicable deployment processes for both LAN VMs and remote targets
- LAN VMs: Local development and testing
- Remote: Production deployments
- Focus on creating consistent, reproducible processes
Unless otherwise instructed, assume Daniel will be placing deployed services and tools behind Cloudflare authentication.
- Every environment has a cloudflare container
- Runs a remotely managed network
- Do not attempt to set up or edit it locally
- Container has a network called `cloudflared`
To ensure services can be routed to:
- Add cloudflared as an external network attachment
- Give containers a unique hostname for routing
- Example: `crm:80` for a CRM service on port 80
- Services connect to the `cloudflared` network
- Routing happens via unique hostnames
- Authentication handled by Cloudflare
These instructions guide your choice of specific LLMs for AI projects.
Unless there is a compelling reason to use a local LLM, use a cloud one.
For local use, Ollama is available. See what models are there; if you think one is missing that would improve inference, download it.
For cost-efficient workloads (where high reasoning or complex capabilities are not needed), use GPT-5.1-mini from OpenAI, accessed via OpenRouter.
For demanding reasoning work, use Sonnet 4 or Gemini 2.5.
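For the local path, checking and extending the available models might look like this sketch (`llama3.2` is one plausible pull target, given the Llama 3.2 preference stated elsewhere in these rules):

```bash
ollama list              # see which models are already installed
ollama pull llama3.2     # download a missing model that would improve inference
```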
MCPs are stored here: `/home/daniel/.codeium/windsurf/mcp_config.json`
- Windsurf has a limit of 100 active tools
- Be proactive in supporting Daniel to prune unused MCPs
- Quality over quantity - activate tools judiciously
When you could achieve a task through multiple methods:
- MCP (preferred)
- SSH into server
- Direct CLI invocation
Always favor using MCPs when available.
- Ask Daniel to clarify the purpose of an MCP if you're unsure
- Don't assume functionality - verify before using
Use the following IP references unless Daniel indicates he is off the home LAN, in which case assume these are unavailable or use Tailnet alternatives.
| IP Address | Hostname | Description |
| ---------- | ---------------- | ----------------------------------- |
| 10.0.0.1 | `opnsense` | Gateway / Router |
| 10.0.0.2 | `proxmox-host` | Proxmox (Ubuntu VM & HA containers) |
| 10.0.0.3 | `home-assistant` | Home Assistant OS |
| 10.0.0.4 | `home-server` | Ubuntu VM (core services host) |
| 10.0.0.50 | `synology` | Synology NAS (DS920+) |
- Primary Development: Home LAN (10.0.0.0/24)
- External Access: Cloudflare IPs and Tailscale endpoints when off-site
- Local IP: 10.0.0.6 (`enp6s0` interface)
These guidelines cover tool selection when you have overlapping resources to achieve an outcome.
If Daniel prompts something like: "this URL has the API context that we need", you should scrape that content (for example using Firecrawl MCP or similar). Only if that approach proves unfruitful should you move to using headless browsers to attempt to extract the documentation.
Daniel frequently works on AI projects with these preferences:
- Primary: OpenRouter (preferred for cloud LLM access)
- Fallback: OpenAI (when OpenRouter adds unnecessary complexity)
- Local: Ollama is installed
- Local Model: Favor Llama 3.2 for general-purpose local tasks
- API keys are on path
- 1Password is available via CLI
- Use 1Password wherever possible to save and read secrets
- OpenRouter - Preferred backend for cloud LLM access
- OpenAI - Fallback when OpenRouter adds complexity
- Ollama - Local LLM deployment (Llama 3.2 preferred)
- Hugging Face - Creating copies of datasets
- Wasabi - Cloud storage solution
- Netlify - Web deployment and hosting
- Cloudflare - DNS management and tunneling
- Default to private - Keep repositories private unless there's a specific reason for public access
- Backup-first approach - Strong preference for local backups of all important data
- UV (primary environment manager)
- Regular `venv` (fallback for compatibility issues)
- Local backups are essential - always ensure local copies exist
- Hugging Face for dataset management and sharing
- Wasabi for cloud storage needs
- Netlify for static sites and web applications
- Cloudflare for DNS and traffic management
- GitHub for source control and CI/CD
- Private by default - All repositories should be private unless explicitly needed public
- Local backup strategy - Maintain local copies of critical data and configurations
- Authenticated CLIs - Use properly authenticated tools for secure operations
- OS: Kubuntu (Ubuntu + KDE Plasma), Latest release
- Filesystem: BTRFS + RAID5, 5 physical drives in array
- Model: Intel Core i7-12700F
- Cores: 12 cores / 20 threads
- Model: AMD Radeon RX 7700 XT (gfx1101 / Navi 32)
- Driver: `amdgpu`
- ROCm: Installed (important for LLM development and local AI tasks)
- RAM: 64 GB Installed
- Interface: `enp6s0`
- LAN IP: 10.0.0.6
- Model: MSI PRO B760M-A WIFI (MS-7D99)
Within every project, you may wish to configure and use a folder structure to receive instructions from Daniel and write them back to him. If this is partially set up, finish it and use it.
Paths are relative to the repo base:
| Path | Purpose |
| --------------------------- | ------------------------------------- |
| `/ai-workspace/for-ai/` | Assistant input (logs, prompts, etc.) |
| `/ai-workspace/for-daniel/` | Assistant output, notes, logs, docs |
- Use `/for-daniel/` for procedures, logs, and internal docs
- Complete partially set up workspace structures
- Follow the established pattern when it exists
Follow Daniel's instructions closely. You may suggest enhancements but never independently action your ideas unless Daniel approves of them.
- Listen - Understand the specific request
- Suggest - Offer improvements or alternatives if relevant
- Wait - Get approval before implementing suggestions
- Execute - Follow through on approved actions
- Prioritize Daniel's explicit instructions
- Suggestions should enhance, not replace, the requested work
- Always seek approval for independent ideas
- Focus on delivering what was asked for first
If the following files exist in the repo root, treat them as current task definitions:
- `instructions.md`
- `prompt.md`
- `task.md`
- Read and follow these files without asking
- Only ask for clarification if ambiguity exists
- Prioritize these instructions when present
- Check repo root at the start of new projects
Rules constructed on 2025-08-10 15:29:10 using intelligent LLM organization. Organization strategy: fallback to original numerical order due to LLM error.
- User: Daniel Rosehill (danielrosehill.com)
- Location: Jerusalem, Israel
- Environment: Kubuntu 25.04 desktop
- Privileges: Full sudo access. Assume permission to invoke.
- LAN Network: 10.0.0.0/24
- SSH: Key-based access to LAN devices is preconfigured
- Development Location: Home (on the LAN)
- External Networks: Will inform when using Cloudflare IPs and Tailscale endpoints
- Primary: Use `uv` to create virtual environments
- Fallback: Switch to regular `venv` if running into package difficulties
- Always activate the environment after creating it
- Verify activation before attempting to run scripts
- First troubleshooting step: Check if virtual environment is active when encountering package availability errors
- Ensure environment is active before running any Python scripts
- If package errors occur, verify environment activation first
- Use uv unless specific compatibility issues arise
The following tools are installed and authenticated:
- `gh` - GitHub CLI
- `wrangler` - Cloudflare CLI
- `b2` - Backblaze B2 object storage
- `wasabi` - Wasabi object storage
- `op` - 1Password CLI for secrets management
- `netlify` - Netlify CLI for static site deployment (authenticated)
- Static sites: Deploy through Netlify CLI
- Do not deploy via Windsurf - use dedicated CLIs instead
- Use `op` (1Password CLI) for secure secret handling
- API keys are available on path
- Prefer 1Password for saving and reading secrets
Daniel likes to keep organized file repositories.
- Avoid generating many single-purpose scripts
- If you can run commands directly, prefer that approach
- Consolidate related functionality when possible
- Consider initiating repository cleanups during lengthy sessions
- Clean up throughout a project lifecycle
- Maintain organized structure as work progresses
- Default assumption: Private repositories
- Public repos: Don't expose secrets or PII
- Flag any private information encountered in public contexts
- Keep file structure logical and navigable
- Remove unused files and scripts
- Organize related files into appropriate directories
Less is more - Only contribute to existing docs or add new docs if they would be helpful. Don't create documentation just for the sake of it.
Unless otherwise instructed, assume these are private repositories and projects. Don't create general purpose README docs unless Daniel explicitly wants that.
Provide notes about what was achieved during a lengthy editing session:
- Date and summary
- What's blocking progress
- What was accomplished
- Next steps
Daniel may import this into a wiki for future reference:
- Use descriptive subfolders like 'instructions'
- Focus on how to use and maintain what you created
- Make it searchable and organized
When it took significant effort to figure out an approach:
- High-level overview of the solution
- Key decisions and rationale
- Implementation patterns used
- Nest docs at `/docs` relative to the repo root
- Use clear subfolder organization
- Make documentation self-contained and useful for future reference
- Docker is installed and available
- Use Docker to create working prototypes of services
- Create replicable deployment processes for both LAN VMs and remote targets
- LAN VMs: Local development and testing
- Remote: Production deployments
- Focus on creating consistent, reproducible processes
Unless otherwise instructed, assume Daniel will be placing deployed services and tools behind Cloudflare authentication.
- Every environment has a cloudflare container
- Runs a remotely managed network
- Do not attempt to set up or edit it locally
- Container has a network called `cloudflared`
To ensure services can be routed to:
- Add cloudflared as an external network attachment
- Give containers a unique hostname for routing
- Example: `crm:80` for a CRM service on port 80
- Services connect to the `cloudflared` network
- Routing happens via unique hostnames
- Authentication handled by Cloudflare
You will frequently be required to use LLMs to achieve various objectives. The following decision-making logic should guide your selection process. Use it in place of your own reasoning, although it can be overridden by explicit instruction:
```yaml
llm_selection_tree:
  # Primary decision: Cloud vs Local
  deployment_preference: "cloud" # Default to cloud unless compelling local reason

  # Cloud model selection logic
  cloud_selection:
    # Task complexity assessment
    task_categories:
      cost_effective:
        description: "Simple instructions, basic text processing, routine tasks"
        primary_model: "openai/gpt-5.1-mini"
        fallback_models:
          - "openai/gpt-4.1-mini" # Only if 5.1-mini insufficient for cost optimization
        provider: "openrouter"
      deep_reasoning:
        description: "Complex problem-solving, advanced reasoning, sophisticated language processing"
        primary_models:
          - "anthropic/claude-3.5-sonnet" # Prefer Claude for reasoning
          - "google/gemini-2.0-flash-thinking" # Alternative reasoning model
        provider: "openrouter"
      flagship_reserved:
        description: "State-of-the-art tasks requiring cutting-edge capabilities"
        models:
          - "anthropic/claude-3.5-sonnet"
          - "google/gemini-2.0-pro"
        provider: "openrouter"

  # Local model fallback (ollama)
  local_selection:
    compelling_reasons:
      - "Privacy/security requirements"
      - "Offline operation needed"
      - "Specific local model advantages"
      - "Cost constraints for high-volume tasks"
    instructions: "Check available ollama models, download if missing optimal model"

  # Model upgrade policy
  version_policy:
    rule: "Always use latest cost-effective model"
    examples:
      - "gpt-5.1-mini replaces gpt-4.1-mini"
      - "Only fallback to older versions for cost optimization when latest insufficient"

  # Provider routing
  providers:
    openrouter:
      access_method: "API key via 1Password CLI or direct"
      models:
        cost_effective: "openai/gpt-5.1-mini"
        reasoning: "anthropic/claude-3.5-sonnet"
        alternative_reasoning: "google/gemini-2.0-flash-thinking"
    ollama:
      access_method: "Local installation"
      check_command: "ollama list"
      download_command: "ollama pull <model>"
```
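As a hedged illustration of routing a cost-effective task through OpenRouter per the tree above (a sketch using OpenRouter's standard chat-completions endpoint; the `OPENROUTER_API_KEY` variable is assumed to have been populated from 1Password, and the prompt is illustrative):

```bash
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-5.1-mini",
    "messages": [{"role": "user", "content": "Summarize this changelog."}]
  }'
```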
# MCP Handling
## Configuration Location
MCPs are stored here: `/home/daniel/.codeium/windsurf/mcp_config.json`
## Best Practices
### Tool Limit Management
- Windsurf has a limit of **100 active tools**
- Be proactive in supporting Daniel to prune unused MCPs
- Quality over quantity - activate tools judiciously
### Priority Hierarchy
When you could achieve a task through multiple methods:
1. **MCP** (preferred)
2. SSH into server
3. Direct CLI invocation
Always favor using MCPs when available.
### Clarification Protocol
- Ask Daniel to clarify the purpose of an MCP if you're unsure
- Don't assume functionality - verify before using
## LAN IP Map
Use the following IP references unless Daniel indicates he is off the home LAN, in which case assume these are unavailable or use Tailnet alternatives.
| IP Address | Hostname | Description |
| ---------- | ---------------- | ----------------------------------- |
| 10.0.0.1 | `opnsense` | Gateway / Router |
| 10.0.0.2 | `proxmox-host` | Proxmox (Ubuntu VM & HA containers) |
| 10.0.0.3 | `home-assistant` | Home Assistant OS |
| 10.0.0.4 | `home-server` | Ubuntu VM (core services host) |
| 10.0.0.50 | `synology` | Synology NAS (DS920+) |
## Network Context
- **Primary Development**: Home LAN (10.0.0.0/24)
- **External Access**: Cloudflare IPs and Tailscale endpoints when off-site
- **Local IP**: 10.0.0.6 (`enp6s0` interface)
# Prioritisation Instructions
These guidelines cover tool selection when you have overlapping resources to achieve an outcome.
## Scrape before using headless browsers
If Daniel prompts something like: "this URL has the API context that we need", you should scrape that content (for example using Firecrawl MCP or similar). Only if that approach proves unfruitful should you move to using headless browsers to attempt to extract the documentation.
# Project Preferences
## AI Projects
Daniel frequently works on AI projects with these preferences:
### LLM Backends
- **Primary**: OpenRouter (preferred for cloud LLM access)
- **Fallback**: OpenAI (when OpenRouter adds unnecessary complexity)
- **Local**: Ollama is installed
- **Local Model**: Favor Llama 3.2 for general-purpose local tasks
### API Management
- API keys are on path
- 1Password is available via CLI
- Use 1Password wherever possible to save and read secrets
## Core Technology Stack
### AI & LLM Services
- **OpenRouter** - Preferred backend for cloud LLM access
- **OpenAI** - Fallback when OpenRouter adds complexity
- **Ollama** - Local LLM deployment (Llama 3.2 preferred)
- **Hugging Face** - Creating copies of datasets
### Cloud Services
- **Wasabi** - Cloud storage solution
- **Netlify** - Web deployment and hosting
- **Cloudflare** - DNS management and tunneling
### Repository Philosophy
- **Default to private** - Keep repositories private unless there's a specific reason for public access
- **Backup-first approach** - Strong preference for local backups of all important data
## Tool Priorities
### Python Development
1. **UV** (primary environment manager)
2. Regular `venv` (fallback for compatibility issues)
### Data Management
- **Local backups** are essential - always ensure local copies exist
- **Hugging Face** for dataset management and sharing
- **Wasabi** for cloud storage needs
### Deployment Pipeline
- **Netlify** for static sites and web applications
- **Cloudflare** for DNS and traffic management
- **GitHub** for source control and CI/CD
## Privacy & Security Preferences
- **Private by default** - All repositories should be private unless explicitly needed public
- **Local backup strategy** - Maintain local copies of critical data and configurations
- **Authenticated CLIs** - Use properly authenticated tools for secure operations
# System Specifications
## Core System
- **OS**: Kubuntu (Ubuntu + KDE Plasma), Latest release
- **Filesystem**: BTRFS + RAID5, 5 physical drives in array
## Hardware
### CPU
- **Model**: Intel Core i7-12700F
- **Cores**: 12 cores / 20 threads
### GPU
- **Model**: AMD Radeon RX 7700 XT (gfx1101 / Navi 32)
- **Driver**: `amdgpu`
- **ROCm**: Installed (important for LLM development and local AI tasks)
### Memory
- **RAM**: 64 GB Installed
### Network
- **Interface**: `enp6s0`
- **LAN IP**: 10.0.0.6
### Motherboard
- **Model**: MSI PRO B760M-A WIFI (MS-7D99)
# Workflow & Execution
## AI Workspace Structure
Within every project, you may wish to configure and use a folder structure to receive instructions from Daniel and write them back to him. If this is partially set up, finish it and use it.
Paths are relative to the repo base:
| Path | Purpose |
| --------------------------- | ------------------------------------- |
| `/ai-workspace/for-ai/` | Assistant input (logs, prompts, etc.) |
| `/ai-workspace/for-daniel/` | Assistant output, notes, logs, docs |
## Usage Guidelines
- Use `/for-daniel/` for procedures, logs, and internal docs
- Complete partially set up workspace structures
- Follow the established pattern when it exists
## Execution Policy
### Core Principle
Follow Daniel's instructions closely. You may suggest enhancements but **never independently action your ideas unless Daniel approves of them**.
### Workflow
1. **Listen** - Understand the specific request
2. **Suggest** - Offer improvements or alternatives if relevant
3. **Wait** - Get approval before implementing suggestions
4. **Execute** - Follow through on approved actions
### Guidelines
- Prioritize Daniel's explicit instructions
- Suggestions should enhance, not replace, the requested work
- Always seek approval for independent ideas
- Focus on delivering what was asked for first
## Implicit Instructions
### Auto-Detection Files
If the following files exist in the repo root, treat them as current task definitions:
- `instructions.md`
- `prompt.md`
- `task.md`
### Behavior
- **Read and follow** these files without asking
- **Only ask for clarification** if ambiguity exists
- **Prioritize** these instructions when present
- **Check repo root** at the start of new projects
## Structure
### Core Components
- `user-profile.md` - Basic user information and environment
- `system-specs.md` - Hardware and system specifications
- `network-config.md` - LAN IP mappings and network setup
### Infrastructure & Tools
- `MCP-handling.md` - MCP server management and best practices
- `containerization.md` - Docker and deployment guidelines
- `cloudflare-tunnels.md` - Cloudflare setup and routing
- `cli-tools.md` - Available CLIs and authentication status
### Workflow
- `ai-workspace.md` - Standard project folder structure
- `file-hygiene.md` - Repository organization principles
- `execution-policy.md` - How to follow instructions and suggest improvements
- `implicit-instructions.md` - Auto-detection of task files
## Usage
These snippets can be:
1. **Combined** into a complete rules file
2. **Updated independently** when specific areas change
3. **Mixed and matched** for different contexts or projects
4. **Version controlled** separately to track changes in specific areas
## Building Complete Rules
Use the build script to combine snippets:
```bash
./scripts/build-from-snippets.sh
```
- Primary method: Use `.env` files for environment variables in scripts
- Never use `op` directly in scripts - always go through environment variables
Before asking "do you have a secret for X?":
- Check 1Password first using `op list items` and `op get item`
- Automatically add found secrets to `.env`
- Only ask the user if the secret is not found in 1Password
- Discovery: Use `op` to check existing secrets
- Environment: Add to `.env` for script consumption
- New secrets: When Daniel provides new secrets, help add them to 1Password
- Storage: `op create item` for new API keys and credentials
- API keys are available on path via 1Password CLI
- Use descriptive names in 1Password for easy discovery
- Always verify the `.env` file exists and is properly formatted
- Include `.env` in `.gitignore` for security
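A sketch of the discovery-to-`.env` flow described above, assuming the 1Password CLI v1 syntax used in these rules; the item name, field name, and key name are placeholders:

```bash
# Discovery: look for an existing secret in 1Password
op list items | grep -i openrouter

# Pull the credential and append it to .env for script consumption
echo "OPENROUTER_API_KEY=$(op get item 'OpenRouter' --fields credential)" >> .env

# Keep the file out of version control
grep -qx '.env' .gitignore || echo '.env' >> .gitignore
```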
Unless otherwise instructed, assume Daniel will be placing deployed services and tools behind Cloudflare authentication.
- Every environment has a cloudflare container
- Runs a remotely managed network
- Do not attempt to set up or edit it locally
- Container has a network called
cloudflared
To ensure services can be routed to:
- Add cloudflared as an external network attachment
- Give containers a unique hostname for routing
- Example:
crm:80
for a CRM service on port 80
- Example:
- Services connect to the
cloudflared
network - Routing happens via unique hostnames
- Authentication handled by Cloudflare
- Docker is installed and available
- Use Docker to create working prototypes of services
- Create replicable deployment processes for both LAN VMs and remote targets
- LAN VMs: Local development and testing
- Remote: Production deployments
- Focus on creating consistent, reproducible processes
Less is more - Only contribute to existing docs or add new docs if they would be helpful. Don't create documentation just for the sake of it.
Unless otherwise instructed, assume these are private repositories and projects. Don't create general purpose README docs unless Daniel explicitly wants that.
Provide notes about what was achieved during a lengthy editing session:
- Date and summary
- What's blocking progress
- What was accomplished
- Next steps
Daniel may import this into a wiki for future reference:
- Use descriptive subfolders like 'instructions'
- Focus on how to use and maintain what you created
- Make it searchable and organized
When it took significant effort to figure out an approach:
- High-level overview of the solution
- Key decisions and rationale
- Implementation patterns used
- Nest docs at
/docs
relative to repo root - Use clear subfolder organization
- Make documentation self-contained and useful for future reference
Follow Daniel's instructions closely. You may suggest enhancements but never independently action your ideas unless Daniel approves of them.
- Listen - Understand the specific request
- Suggest - Offer improvements or alternatives if relevant
- Wait - Get approval before implementing suggestions
- Execute - Follow through on approved actions
- Prioritize Daniel's explicit instructions
- Suggestions should enhance, not replace, the requested work
- Always seek approval for independent ideas
- Focus on delivering what was asked for first
Daniel likes to keep organized file repositories.
- Avoid generating many single-purpose scripts
- If you can run commands directly, prefer that approach
- Consolidate related functionality when possible
- Consider initiating repository cleanups during lengthy sessions
- Clean up throughout a project lifecycle
- Maintain organized structure as work progresses
- Default assumption: Private repositories
- Public repos: Don't expose secrets or PII
- Flag any private information encountered in public contexts
- Keep file structure logical and navigable
- Remove unused files and scripts
- Organize related files into appropriate directories
If the following files exist in the repo root, treat them as current task definitions:
instructions.md
prompt.md
task.md
- Read and follow these files without asking
- Only ask for clarification if ambiguity exists
- Prioritize these instructions when present
- Check repo root at the start of new projects
MCPs are stored here: /home/daniel/.codeium/windsurf/mcp_config.json
- Windsurf has a limit of 100 active tools
- Be proactive in supporting Daniel to prune unused MCPs
- Quality over quantity - activate tools judiciously
When you could achieve a task through multiple methods:
- MCP (preferred)
- SSH into server
- Direct CLI invocation
Always favor using MCPs when available.
- Ask Daniel to clarify the purpose of an MCP if you're unsure
- Don't assume functionality - verify before using
Use the following IP references unless Daniel indicates he is off the home LAN, in which case assume these are unavailable or use Tailnet alternatives.
| IP Address | Hostname | Description |
|---|---|---|
| 10.0.0.1 | `opnsense` | Gateway / Router |
| 10.0.0.2 | `proxmox-host` | Proxmox (Ubuntu VM & HA containers) |
| 10.0.0.3 | `home-assistant` | Home Assistant OS |
| 10.0.0.4 | `home-server` | Ubuntu VM (core services host) |
| 10.0.0.50 | `synology` | Synology NAS (DS920+) |
- Primary Development: Home LAN (10.0.0.0/24)
- External Access: Cloudflare IPs and Tailscale endpoints when off-site
- Local IP: 10.0.0.6 (`enp6s0` interface)
Daniel frequently works on AI projects with these preferences:
- Primary: OpenRouter (preferred for cloud LLM access)
- Fallback: OpenAI (when OpenRouter adds unnecessary complexity)
- Local: Ollama is installed
- Local Model: Favor Llama 3.2 for general-purpose local tasks
- API keys are on path
- 1Password is available via CLI
- Try to use 1Password wherever possible to save and read secrets
- Primary: Use `uv` to create virtual environments
- Fallback: Switch to regular `venv` if running into package difficulties
- Always activate the environment after creating it
- Verify activation before attempting to run scripts
- First troubleshooting step: Check if virtual environment is active when encountering package availability errors
- Ensure environment is active before running any Python scripts
- If package errors occur, verify environment activation first
- Use uv unless specific compatibility issues arise
- OpenRouter - Preferred backend for cloud LLM access
- OpenAI - Fallback when OpenRouter adds complexity
- Ollama - Local LLM deployment (Llama 3.2 preferred)
- Hugging Face - Creating copies of datasets
- Wasabi - Cloud storage solution
- Netlify - Web deployment and hosting
- Cloudflare - DNS management and tunneling
- Default to private - Keep repositories private unless there's a specific reason for public access
- Backup-first approach - Strong preference for local backups of all important data
- UV (primary environment manager)
- Regular `venv` (fallback for compatibility issues)
- Local backups are essential - always ensure local copies exist
- Hugging Face for dataset management and sharing
- Wasabi for cloud storage needs
- Netlify for static sites and web applications
- Cloudflare for DNS and traffic management
- GitHub for source control and CI/CD
- Private by default - All repositories should be private unless explicitly needed public
- Local backup strategy - Maintain local copies of critical data and configurations
- Authenticated CLIs - Use properly authenticated tools for secure operations
- OS: Kubuntu (Ubuntu + KDE Plasma), Latest release
- Filesystem: BTRFS + RAID5, 5 physical drives in array
- Model: Intel Core i7-12700F
- Cores: 12 cores / 20 threads
- Model: AMD Radeon RX 7700 XT (gfx1101 / Navi 32)
- Driver: `amdgpu`
- ROCm: Installed (important for LLM development and local AI tasks)
- RAM: 64 GB Installed
- Interface: `enp6s0`
- LAN IP: 10.0.0.6
- Model: MSI PRO B760M-A WIFI (MS-7D99)
- User: Daniel Rosehill (danielrosehill.com)
- Location: Jerusalem, Israel
- Environment: Kubuntu 25.04 desktop
- Privileges: Full sudo access. Assume permission to invoke.
- LAN Network: 10.0.0.0/24
- SSH: Key-based access to LAN devices is preconfigured
- Development Location: Home (on the LAN)
- External Networks: Will inform when using Cloudflare IPs and Tailscale endpoints
Rules constructed on 2025-08-10 18:39:00 using intelligent LLM organization. Organization strategy: fell back to the original numerical order due to an LLM error.
Last updated: 2025-08-11 16:26:25
The following guidelines should govern your work with the user, Daniel:
- User: Daniel Rosehill (danielrosehill.com)
- Location: Jerusalem, Israel
- Environment: Kubuntu 25.04 desktop
- Privileges: Full sudo access. Assume permission to invoke.
- LAN Network: 10.0.0.0/24
- SSH: Key-based access to LAN devices is preconfigured
- Development Location: Home (on the LAN)
- External Networks: Will inform when using Cloudflare IPs and Tailscale endpoints
- OS: Kubuntu (Ubuntu + KDE Plasma), Latest release
- Filesystem: BTRFS + RAID5, 5 physical drives in array
- Model: Intel Core i7-12700F
- Cores: 12 cores / 20 threads
- Model: AMD Radeon RX 7700 XT (gfx1101 / Navi 32)
- Driver: `amdgpu`
- ROCm: Installed (important for LLM development and local AI tasks)
- RAM: 64 GB Installed
- Interface: `enp6s0`
- LAN IP: 10.0.0.6
- Model: MSI PRO B760M-A WIFI (MS-7D99)
Use the following IP references unless Daniel indicates he is off the home LAN, in which case assume these are unavailable or use Tailnet alternatives.
| IP Address | Hostname | Description |
|---|---|---|
| 10.0.0.1 | `opnsense` | Gateway / Router |
| 10.0.0.2 | `proxmox-host` | Proxmox (Ubuntu VM & HA containers) |
| 10.0.0.3 | `home-assistant` | Home Assistant OS |
| 10.0.0.4 | `home-server` | Ubuntu VM (core services host) |
| 10.0.0.50 | `synology` | Synology NAS (DS920+) |
- Primary Development: Home LAN (10.0.0.0/24)
- External Access: Cloudflare IPs and Tailscale endpoints when off-site
- Local IP: 10.0.0.6 (`enp6s0` interface)
- Docker is installed and available
- Use Docker to create working prototypes of services
- Create replicable deployment processes for both LAN VMs and remote targets
- LAN VMs: Local development and testing
- Remote: Production deployments
- Focus on creating consistent, reproducible processes
Unless otherwise instructed, assume Daniel will be placing deployed services and tools behind Cloudflare authentication.
- Every environment has a Cloudflare container
- Runs a remotely managed network
- Do not attempt to set up or edit it locally
- Container has a network called `cloudflared`
To ensure traffic can be routed to services:
- Add `cloudflared` as an external network attachment
- Give containers a unique hostname for routing
- Example: `crm:80` for a CRM service on port 80 (see the sketch after this list)
- Services connect to the `cloudflared` network
- Routing happens via unique hostnames
- Authentication handled by Cloudflare
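To make the routing concrete, here is a minimal sketch of attaching a prototype service to the existing `cloudflared` network with the Docker CLI. The image and service names are hypothetical; the network is assumed to already exist and is never created or edited locally.

```bash
# Attach a prototype CRM service to the remotely managed cloudflared network.
# The image name (my-crm-image) is hypothetical; the network already exists.
docker run -d \
  --name crm \
  --hostname crm \
  --network cloudflared \
  my-crm-image:latest
```

The compose-file equivalent is declaring `cloudflared` under `networks:` with `external: true`, so the stack attaches to it without trying to manage it.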
- Primary: Use `uv` to create virtual environments
- Fallback: Switch to regular `venv` if running into package difficulties
- Always activate the environment after creating it
- Verify activation before attempting to run scripts
- First troubleshooting step: Check if virtual environment is active when encountering package availability errors
- Ensure environment is active before running any Python scripts
- If package errors occur, verify environment activation first
- Use uv unless specific compatibility issues arise (a minimal workflow is sketched below)
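A minimal sketch of the environment workflow described above, assuming a project-local `.venv`:

```bash
# Preferred: create and activate a uv-managed virtual environment
uv venv .venv
source .venv/bin/activate

# Fallback if uv runs into package difficulties:
# python3 -m venv .venv && source .venv/bin/activate

# Verify activation before running scripts
which python  # should resolve inside .venv
```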
The following tools are installed and authenticated:
- `gh` - GitHub CLI
- `wrangler` - Cloudflare CLI
- `b2` - Backblaze B2 object storage
- `wasabi` - Wasabi object storage
- `op` - 1Password CLI for secrets management
- Netlify CLI - Static site deployment (authenticated)
- Static sites: Deploy through Netlify CLI
- Do not deploy via Windsurf - use dedicated CLIs instead
- Use `op` (1Password CLI) for secure secret handling
- API keys are available on path
- Prefer 1Password wherever possible for saving and reading secrets (see the sketch below)
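A sketch of the intended pattern, assuming the CLIs above are already authenticated; the vault, item, and field names in the `op` secret reference are hypothetical:

```bash
# Read a secret via the 1Password CLI rather than hardcoding it
export OPENROUTER_API_KEY="$(op read 'op://Private/OpenRouter/api-key')"

# Deploy a static site through the dedicated Netlify CLI (not via Windsurf)
netlify deploy --prod --dir=dist
```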
Daniel likes to keep organized file repositories.
- Avoid generating many single-purpose scripts
- If you can run commands directly, prefer that approach
- Consolidate related functionality when possible
- Consider initiating repository cleanups during lengthy sessions
- Clean up throughout a project lifecycle
- Maintain organized structure as work progresses
- Default assumption: Private repositories
- Public repos: Don't expose secrets or PII
- Flag any private information encountered in public contexts
- Keep file structure logical and navigable
- Remove unused files and scripts
- Organize related files into appropriate directories
- OpenRouter - Preferred backend for cloud LLM access
- OpenAI - Fallback when OpenRouter adds complexity
- Ollama - Local LLM deployment (Llama 3.2 preferred)
- Hugging Face - Creating copies of datasets
- Wasabi - Cloud storage solution
- Netlify - Web deployment and hosting
- Cloudflare - DNS management and tunneling
- UV - Python environment management (primary)
- YADM - Dotfiles and configuration versioning
- GitHub - Repository management and version control
- Default to private - Keep repositories private unless there's a specific reason for public access
- Backup-first approach - Strong preference for local backups of all important data
- UV (primary environment manager)
- Regular `venv` (fallback for compatibility issues)
- Local backups are essential - always ensure local copies exist
- Hugging Face for dataset management and sharing
- Wasabi for cloud storage needs
- Netlify for static sites and web applications
- Cloudflare for DNS and traffic management
- GitHub for source control and CI/CD
- Private by default - All repositories should be private unless explicitly needed public
- Local backup strategy - Maintain local copies of critical data and configurations
- Authenticated CLIs - Use properly authenticated tools for secure operations
Daniel frequently works on AI projects with these preferences:
- Primary: OpenRouter (preferred for cloud LLM access)
- Fallback: OpenAI (when OpenRouter adds unnecessary complexity)
- Local: Ollama is installed
- Local Model: Favor Llama 3.2 for general-purpose local tasks
- API keys are on path
- 1Password is available via CLI
- Try to use 1Password wherever possible to save and read secrets
- Containerization: Docker installed for prototypes
- Python: uv for virtual environments (fallback to regular venv if issues)
- GUI: PySide6, Tauri, Qt, or Electron for modern interfaces
- Static Sites: Netlify (CLI authenticated)
You will frequently be required to use LLMs to achieve various objectives. The following decision-making logic should guide your model selection; use it in place of your own reasoning, though it can be overridden by explicit instruction:
```yaml
llm_selection_tree:
  # Primary decision: Cloud vs Local
  deployment_preference: "cloud"  # Default to cloud unless compelling local reason

  # Cloud model selection logic
  cloud_selection:
    # Task complexity assessment
    task_categories:
      cost_effective:
        description: "Simple instructions, basic text processing, routine tasks"
        primary_model:
          openrouter: "openai/gpt-5.1-mini"
          openai_direct: "gpt-5-mini-2025-08-07"
        fallback_models:
          - "openai/gpt-4.1-mini"  # Only if 5.1-mini insufficient for cost optimization
        provider: "openrouter"  # Default, but can use openai_direct
      deep_reasoning:
        description: "Complex problem-solving, advanced reasoning, sophisticated language processing"
        primary_models:
          - "anthropic/claude-3.5-sonnet"  # Prefer Claude for reasoning
          - "google/gemini-2.0-flash-thinking"  # Alternative reasoning model
        provider: "openrouter"
      flagship_reserved:
        description: "State-of-the-art tasks requiring cutting-edge capabilities"
        models:
          - "anthropic/claude-3.5-sonnet"
          - "google/gemini-2.0-pro"
        provider: "openrouter"

  # Local model fallback (ollama)
  local_selection:
    compelling_reasons:
      - "Privacy/security requirements"
      - "Offline operation needed"
      - "Specific local model advantages"
      - "Cost constraints for high-volume tasks"
    instructions: "Check available ollama models, download if missing optimal model"

  # Model upgrade policy
  version_policy:
    rule: "Always use latest cost-effective model"
    examples:
      - "gpt-5.1-mini replaces gpt-4.1-mini"
      - "Only fallback to older versions for cost optimization when latest insufficient"

  # Provider routing
  providers:
    openrouter:
      access_method: "API key via 1Password CLI or direct"
      models:
        cost_effective: "openai/gpt-5.1-mini"
        reasoning: "anthropic/claude-3.5-sonnet"
        alternative_reasoning: "google/gemini-2.0-flash-thinking"
    ollama:
      access_method: "Local installation"
      check_command: "ollama list"
      download_command: "ollama pull <model>"
```
(End of decision tree)
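As a hedged illustration of the tree's two main branches, the sketch below routes a cost-effective task through OpenRouter's OpenAI-compatible API and falls back to a local Ollama model. It assumes `OPENROUTER_API_KEY` is already exported (for example via the 1Password CLI) and that `llama3.2` is the preferred local tag:

```bash
# Cloud branch: cost-effective task via OpenRouter
curl -s https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-5.1-mini",
    "messages": [{"role": "user", "content": "Summarize this changelog in three bullets."}]
  }'

# Local branch: check for the preferred model and pull it if missing
ollama list | grep -q llama3.2 || ollama pull llama3.2
```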
MCPs are stored here: `/home/daniel/.codeium/windsurf/mcp_config.json`
Create MCP servers at: `~/mcp`
- Windsurf has a limit of 100 active tools
- Be proactive in supporting Daniel to prune unused MCPs
- Quality over quantity - activate tools judiciously (a quick config check is sketched below)
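A quick way to review what is configured, sketched under the assumption that the config follows the common `mcpServers` JSON convention and that `jq` is installed:

```bash
# List configured MCP servers to support pruning decisions
jq '.mcpServers | keys' /home/daniel/.codeium/windsurf/mcp_config.json
```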
When you could achieve a task through multiple methods:
- MCP (preferred)
- SSH into server
- Direct CLI invocation
Always favor using MCPs when available.
- Ask Daniel to clarify the purpose of an MCP if you're unsure
- Don't assume functionality - verify before using
When developing new MCP servers:
- Place them in `~/mcp/`
- Follow Daniel's existing patterns
- Document tool capabilities clearly
- Consider tool count impact on the 100-tool limit
These guidelines cover tool selection in cases where you have overlapping resources available to achieve an outcome.
If Daniel prompts something like: "this URL has the API context that we need", then you should scrape that content (for example using Firecrawl MCP or similar). Only if that approach proves unfruitful should you move to using headless browsers to attempt to extract the documentation.
Within every project, you may wish to configure and use a folder structure to receive instructions from Daniel and to write output back to him. If this is partially set up, finish it and use it.
Paths are relative to the repo base:
| Path | Purpose |
|---|---|
| `/ai-workspace/for-ai/` | Assistant input (logs, prompts, etc.) |
| `/ai-workspace/for-daniel/` | Assistant output, notes, logs, docs |
- Use `/for-daniel/` for procedures, logs, and internal docs
- Complete partially set up workspace structures
- Follow the established pattern when it exists (see the sketch below)
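A minimal sketch of completing the workspace structure and dropping a dated session note where Daniel expects output; the filename is illustrative:

```bash
# Complete the workspace structure if it is only partially set up
mkdir -p ai-workspace/for-ai ai-workspace/for-daniel

# Example: write a dated session note for Daniel
echo "## Session notes $(date +%F)" >> "ai-workspace/for-daniel/session-notes-$(date +%F).md"
```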
Follow Daniel's instructions closely. You may suggest enhancements but never independently action your ideas unless Daniel approves of them.
- Listen - Understand the specific request
- Suggest - Offer improvements or alternatives if relevant
- Wait - Get approval before implementing suggestions
- Execute - Follow through on approved actions
- Prioritize Daniel's explicit instructions
- Suggestions should enhance, not replace, the requested work
- Always seek approval for independent ideas
- Focus on delivering what was asked for first
If the following files exist in the repo root, treat them as current task definitions:
- `instructions.md`
- `prompt.md`
- `task.md`
- Read and follow these files without asking
- Only ask for clarification if ambiguity exists
- Prioritize these instructions when present
- Check repo root at the start of new projects (a simple check is sketched below)
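A simple check along these lines, run from the repo root at the start of a project:

```bash
# Surface any task definition files present in the repo root
for f in instructions.md prompt.md task.md; do
  [ -f "$f" ] && echo "Found task file: $f"
done
```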
Less is more - Only contribute to existing docs or add new docs if they would be helpful. Don't create documentation just for the sake of it.
Unless otherwise instructed, assume these are private repositories and projects. Don't create general purpose README docs unless Daniel explicitly wants that.
Provide notes about what was achieved during a lengthy editing session:
- Date and summary
- What's blocking progress
- What was accomplished
- Next steps
Daniel may import this into a wiki for future reference:
- Use descriptive subfolders like 'instructions'
- Focus on how to use and maintain what you created
- Make it searchable and organized
When it took significant effort to figure out an approach:
- High-level overview of the solution
- Key decisions and rationale
- Implementation patterns used
- Nest docs at `/docs` relative to repo root
- Use clear subfolder organization
- Make documentation self-contained and useful for future reference
Many of the prompts Daniel provides will have been captured using speech-to-text, so you can expect occasional transcription errors. If you can confidently infer around an obvious error, do so; if you need to check a suspected transcription error with Daniel, do not hesitate to ask.