Building intelligent systems using LLMs, RAG pipelines, and scalable cloud infrastructure
- 🤖 AI Engineer focused on Generative AI and LLM applications
- 🧩 Building RAG pipelines, AI APIs, and intelligent developer tools
- ☁️ Working with AWS, Azure, and cloud-native AI architectures
- 🐳 Deploying AI services with Docker, Kubernetes, and CI/CD
- ⚡ Experience with Go, Kubernetes, and CloudEvents from my time at TriggerMesh
AI & LLM Systems
- Retrieval Augmented Generation (RAG)
- LLM Application Development
- Prompt Engineering
- Agentic Workflows (LangGraph, CrewAI)
- LLMOps & Tracing (LangSmith)
- AI APIs & Model Integrations
- Vector Search & Semantic Retrieval
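The RAG and vector-search items above can be illustrated with a minimal retrieval sketch, pure Python with toy embeddings. This is a hypothetical example (the function names and tiny index are invented for illustration), not code from any project listed here:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, index, k=2):
    # index: list of (text, embedding) pairs; return the top-k most similar chunks
    scored = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in scored[:k]]

def build_prompt(question, chunks):
    # Augment the LLM prompt with the retrieved context before generation
    context = "\n".join(f"- {c}" for c in chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

In a real pipeline the toy vectors would come from an embedding model and the index from a vector database; the retrieve-then-augment flow is the same.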
Backend for AI
- Python
- FastAPI
- REST APIs for AI services
- Async processing & microservices
Cloud & Infrastructure
- AWS
- Azure
- Docker
- Kubernetes
- Linux
- CI/CD pipelines
AI Frameworks & Tools
- LLM APIs
- LangChain
- LangGraph
- LangSmith
- CrewAI
- RAG Architecture
- Vector Databases
⭐ Currently exploring Agentic AI systems, RAG architectures, and cloud-native AI platforms