selflink-backend/
│
├── apps/
│ ├── config/ # feature flags + runtime config cache
│ ├── core/ # shared API router, base models, pagination
│ ├── feed/ # timelines, ranking, fan-out tasks
│ ├── matrix/ # astrology + numerology data sources
│ ├── media/ # uploads, presigned URLs, media policies
│ ├── mentor/ # AI mentor sessions, prompts, memory store
│ ├── messaging/ # threads, direct messages, typing state
│ ├── moderation/ # safety rules, reports, enforcement
│ ├── notifications/ # in-app/push/email dispatch + preferences
│ ├── payments/ # plans, wallet, Stripe integrations
│ ├── reco/ # recommendation features + scoring
│ ├── search/ # OpenSearch clients, indexing tasks
│ ├── social/ # posts, comments, reactions, gifting
│ └── users/ # auth, profiles, privacy controls
│
├── services/
│ ├── realtime/ # FastAPI WebSocket gateway w/ Redis pub-sub
│ └── reco/ # worker processes for advanced ranking
│
├── core/ # Django project: settings, ASGI/WSGI, Celery
│ ├── settings/ # base.py, dev.py, prod.py
│ ├── urls.py
│ ├── asgi.py
│ ├── wsgi.py
│ └── celery.py
│
├── config/ # fixtures + seed data consumed by manage.py
│ └── fixtures/
│
├── infra/ # Docker/K8s definitions + dev Make targets
│ ├── docker/
│ ├── compose.yaml
│ ├── k8s/
│ └── Makefile
│
├── libs/ # shared helpers (ID generator, LLM adapters)
│ ├── idgen.py
│ ├── llm/
│ └── utils/
│
├── tests/ # API + service regression suites
├── manage.py
├── requirements.txt
├── .env.example
└── README.md
- Create a virtualenv and install dependencies:

  ```bash
  python3 -m venv venv
  source venv/bin/activate
  pip install -r requirements.txt
  ```

- Copy `.env.example` to `.env` and adjust secrets.
- Run migrations and start the server:

  ```bash
  python manage.py makemigrations
  python manage.py migrate
  python manage.py runserver
  ```

- Celery worker & beat (optional for dev):

  ```bash
  celery -A core worker -l info
  celery -A core beat -l info
  ```
The infra bundle under `infra/` provisions Postgres, Redis, OpenSearch, MinIO, the Django API, Celery workers, and the realtime gateway.

```bash
make -C infra up      # build & start services
make -C infra logs    # follow logs
make -C infra down    # stop stack
make -C infra migrate # run python manage.py migrate inside the api container
```

Manual equivalent with docker-compose (run from the repo root):

```bash
sudo docker-compose -f infra/compose.yaml down
sudo docker-compose -f infra/compose.yaml up -d --build
sudo docker-compose -f infra/compose.yaml logs -f api
sudo docker-compose -f infra/compose.yaml exec api python manage.py migrate
```

After the stack is running, the API is available at http://localhost:8000, the realtime gateway at ws://localhost:8001/ws, Postgres at localhost:5432, Redis at localhost:6379, OpenSearch at localhost:9200, and the MinIO console at http://localhost:9001.
- OpenSearch connections are managed by `apps.search.client`. Set `OPENSEARCH_ENABLED=false` to fall back to relational lookups.
- Indexing is triggered via Celery tasks (`apps/search/tasks.py`) fired from model signals. Run a worker/beat (`celery -A core worker -l info`) to keep indices fresh.
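The `OPENSEARCH_ENABLED=false` fallback described above can be sketched as a small dispatch function. This is illustrative only: `get_search_backend`, `OpenSearchBackend`, and `OrmBackend` are hypothetical names, not the actual `apps.search.client` API.

```python
# Hypothetical sketch of the OPENSEARCH_ENABLED toggle; the real client
# lives in apps.search.client and the class names here are assumptions.
import os


class OpenSearchBackend:
    """Stand-in for the real OpenSearch client."""

    def search(self, query: str) -> list[str]:
        return [f"opensearch hit for {query!r}"]


class OrmBackend:
    """Relational fallback: plain substring filtering over rows."""

    def __init__(self, rows: list[str]):
        self.rows = rows

    def search(self, query: str) -> list[str]:
        return [r for r in self.rows if query.lower() in r.lower()]


def get_search_backend(rows: list[str]):
    # Mirrors the documented env toggle: anything but "false" keeps OpenSearch.
    if os.getenv("OPENSEARCH_ENABLED", "true").lower() == "false":
        return OrmBackend(rows)
    return OpenSearchBackend()
```

The same shape (check a flag once, hand back an object with a uniform `search` method) keeps view code unaware of which backend is active.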
- Lightweight scoring logic lives in `services/reco/engine.py`. The Celery task `rebuild_user_timeline_task` rebuilds materialized feeds.
- Follow/unfollow actions enqueue rebuilds; schedule periodic refreshes via Celery beat if needed.
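A lightweight scorer of the kind `services/reco/engine.py` describes might look like the sketch below. The fields, weights, and decay curve are assumptions for illustration, not the engine's actual formula.

```python
# Illustrative feed-scoring sketch; field names and weights are hypothetical.
from dataclasses import dataclass


@dataclass
class Candidate:
    post_id: int
    age_hours: float        # how old the post is
    author_affinity: float  # 0..1, how often the viewer interacts with the author


def score(c: Candidate, recency_weight: float = 1.0, affinity_weight: float = 2.0) -> float:
    # Newer posts and closer authors rank higher; hyperbolic decay, untuned.
    recency = recency_weight / (1.0 + c.age_hours)
    return recency + affinity_weight * c.author_affinity


def rank(candidates: list[Candidate]) -> list[Candidate]:
    return sorted(candidates, key=score, reverse=True)
```

A rebuild task would run something like `rank(...)` over a user's candidate set and persist the result as the materialized feed.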
- Sample API tests reside in `tests/test_api.py`. Run them with `python manage.py test` after generating migrations (`python manage.py makemigrations`).
- Feature flag tests in `tests/test_feature_flags.py` exercise the new flag service.
- Seed the database with demo users, posts, and baseline plans via `python manage.py seed_demo`.
- Use `python manage.py seed_demo --reset` to purge existing demo users before reseeding.
- `python manage.py bootstrap_admin` provisions a superuser (configurable via `--email`/`--password`) and sets up moderation/support groups along with baseline feature flags.
- `python manage.py load_fixtures` loads JSON fixtures from `config/fixtures/` (override with `--path`).
- `python manage.py refresh_soulmatch_profiles` recomputes compatibility profiles (use `--user <id>` for a single user).
- Environment variables:
  - `MENTOR_LLM_ENABLED` (true|false) toggles use of the pluggable LLM client.
  - `MENTOR_LLM_PROVIDER` (openai|ollama|mock) selects the provider; `mock` returns canned replies for local dev.
  - `MENTOR_LLM_MODEL` and `OPENAI_API_KEY` configure the OpenAI client.
  - `OLLAMA_HOST` points to a local Ollama service (defaults to `http://localhost:11434`).
  - `MENTOR_LLM_TIMEOUT` controls the request timeout (seconds).
- Mentor sessions persist per-user memory (`apps.mentor.models.MentorMemory`) to inform future responses.
- To run a local model with Ollama: install Ollama, run `ollama serve`, pull a model (e.g., `ollama pull llama3`), then set `MENTOR_LLM_PROVIDER=ollama` and restart the backend.
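Provider selection driven by `MENTOR_LLM_PROVIDER` can be sketched as a simple factory. The class names below are illustrative stand-ins for the real adapters under `libs/llm/`; only the env var names come from the docs above.

```python
# Hypothetical sketch of MENTOR_LLM_PROVIDER dispatch; real adapters live
# under libs/llm/ and may differ in shape.
import os


class MockClient:
    """Mirrors the documented 'mock' provider: canned replies for local dev."""

    def ask(self, prompt: str) -> str:
        return "This is a canned mentor reply."


class OpenAIClient:
    def __init__(self):
        # OPENAI_API_KEY and MENTOR_LLM_MODEL configure the real client.
        self.api_key = os.environ.get("OPENAI_API_KEY", "")


class OllamaClient:
    def __init__(self):
        # Documented default for a local Ollama service.
        self.host = os.getenv("OLLAMA_HOST", "http://localhost:11434")


def get_llm_client():
    provider = os.getenv("MENTOR_LLM_PROVIDER", "mock")
    clients = {"openai": OpenAIClient, "ollama": OllamaClient, "mock": MockClient}
    return clients[provider]()
```

Keeping the factory behind a single function makes it easy for mentor views to stay provider-agnostic.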
- The WebSocket gateway (`services/realtime`) now fans out events via Redis pub/sub. Configure `REALTIME_REDIS_URL` (defaults to `redis://localhost:6379/1`).
- Django publishes message events to per-user channels (`user:<id>`); multiple gateway instances stay in sync through Redis.
- If Redis is unavailable, the system falls back to in-process broadcasting and logs warnings.
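The per-user channel convention (`user:<id>`) can be sketched as below. The JSON payload shape is an assumption for illustration; only the channel naming comes from the docs above.

```python
# Sketch of publishing a message event to a per-user Redis channel.
# The {"type": ..., "data": ...} envelope is a hypothetical wire format.
import json


def user_channel(user_id: int) -> str:
    return f"user:{user_id}"


def encode_event(event_type: str, data: dict) -> str:
    # Gateway instances subscribed to the channel decode this and fan out
    # to connected WebSocket clients.
    return json.dumps({"type": event_type, "data": data})


def publish(redis_client, user_id: int, event_type: str, data: dict) -> None:
    # redis_client is any object with a publish(channel, message) method,
    # e.g. redis.Redis.from_url(REALTIME_REDIS_URL).
    redis_client.publish(user_channel(user_id), encode_event(event_type, data))
```

Because every gateway instance subscribes to the same Redis channels, publishing once from Django reaches whichever instance holds the user's socket.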
- Messaging events also create in-app notifications (`apps/notifications/services.py`). Push/email delivery is stubbed and can be wired to real providers later.
- Typing indicators: `POST /api/v1/threads/<id>/typing/` toggles state; `GET` returns active typing user IDs. Events broadcast to other participants over WebSocket channels.
- Feature flag `FEATURE_SOULMATCH` (default true) controls availability. Override via the environment variable `FEATURE_SOULMATCH=false`.
- `python manage.py refresh_soulmatch_profiles` recomputes compatibility profiles (use `--user <id>` for a single user).
- `python manage.py rebuild_soulmatch_scores` computes pairwise scores.
- The API lives at `/api/v1/soulmatch/` (list) and `/api/v1/soulmatch/refresh/` (manual refresh).
- Feature flag `FEATURE_PAYMENTS` (default true) controls availability.
- Configure `STRIPE_API_KEY`, `STRIPE_WEBHOOK_SECRET`, `PAYMENTS_CHECKOUT_SUCCESS_URL`, and `PAYMENTS_CHECKOUT_CANCEL_URL` in your environment.
- Set `Plan.external_price_id` to the Stripe price ID. `POST /api/v1/payments/subscriptions/` returns a `checkout_url` and `session_id` for Stripe Checkout.
- Receive Stripe webhooks at `/api/v1/payments/stripe/webhook/` to update subscription status.
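Webhook handlers must verify that events really came from Stripe. A simplified sketch of the signature scheme (HMAC-SHA256 over `"<timestamp>.<payload>"` with the webhook secret) is shown below; in the actual view you would use `stripe.Webhook.construct_event` with `STRIPE_WEBHOOK_SECRET` rather than hand-rolling this.

```python
# Simplified sketch of Stripe-style webhook signature verification.
# Production code should use stripe.Webhook.construct_event instead.
import hashlib
import hmac


def sign(secret: str, timestamp: int, payload: bytes) -> str:
    signed = f"{timestamp}.".encode() + payload
    return hmac.new(secret.encode(), signed, hashlib.sha256).hexdigest()


def verify(secret: str, timestamp: int, payload: bytes, signature: str) -> bool:
    # compare_digest avoids leaking information via timing differences.
    return hmac.compare_digest(sign(secret, timestamp, payload), signature)
```

Rejecting unverifiable payloads up front means subscription status only changes in response to authentic Stripe events.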
- API throttles default to `THROTTLE_USER_RATE=120/min` and `THROTTLE_ANON_RATE=60/min` (override via env vars).
- Write-heavy endpoints (posts, comments, messages, mentor asks) have additional per-user limits enforced via `django-ratelimit`.
- Users can control notification delivery (push/email/digest) and quiet hours via the `PATCH /api/v1/users/me/settings` payload (`push_enabled`, `email_enabled`, `digest_enabled`, `quiet_hours`).
- Moderation APIs: regular users file reports via `/api/v1/moderation/reports/`. Staff (group `moderation_team` or admin) manage reports at `/api/v1/moderation/admin/reports/` and enforce actions via `/api/v1/moderation/enforcements/`.
- Auto-flagging: configure `MODERATION_BANNED_WORDS` (comma separated) to auto-create moderation reports for posts/messages containing banned terms.
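The banned-word check behind auto-flagging can be sketched as below. Whole-word matching is an assumption for illustration; the real rule set lives in `apps/moderation` and may match differently.

```python
# Sketch of the MODERATION_BANNED_WORDS auto-flagging check.
# Whole-word matching is an assumed behaviour, not the documented one.
import os
import re


def banned_words() -> list[str]:
    raw = os.getenv("MODERATION_BANNED_WORDS", "")
    return [w.strip().lower() for w in raw.split(",") if w.strip()]


def find_banned(text: str) -> list[str]:
    words = set(re.findall(r"[a-z']+", text.lower()))
    return sorted(w for w in banned_words() if w in words)


def should_flag(text: str) -> bool:
    # A hit would trigger creation of a moderation report on the post/message.
    return bool(find_banned(text))
```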
- Prometheus metrics are exposed at `/metrics` via `django-prometheus`; run a Prometheus instance or forward metrics from that endpoint.
- Structured JSON logging is enabled by default (set `APP_LOG_LEVEL` to adjust app logger verbosity).
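A minimal structured-JSON formatter in the spirit of that default logging setup is sketched below. The field names (`level`, `logger`, `message`) are illustrative, not the project's exact log schema.

```python
# Minimal structured-JSON log formatter; field names are assumptions.
import json
import logging


class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        # One JSON object per line keeps logs machine-parseable.
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })
```

Attaching this formatter to the root handler, with the level read from `APP_LOG_LEVEL`, yields one JSON object per log line.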
- `backand.md` — End-to-end product blueprint dated 2025-10-29 covering vision, differentiators, architecture, and data models.
- `contrinutors.md` — Contribution guide with project values, workflow expectations, upgrade checklists, and coding/testing standards.
- `README_for_env.md` — How-to for `.env` management plus line-by-line explanations of every environment variable the stack consumes.