Lead Data Engineer | Cloud & ETL Specialist | FinTech & Startups
I lead data teams in FinTech, focusing on modern data infrastructure, scalable ETL pipelines, and cloud-native solutions. Over the past decade, I've built several data platforms and teams, delivering end-to-end solutions alongside engineers, data scientists, analysts, and ML engineers.
- Building scalable ETL pipelines using Dagster, dbt, and Kubernetes.
- Designing cloud-native data solutions with Azure & AWS.
- Developing API connectors with dlt, DuckDB, and Polars.
- Modernizing legacy Java data pipelines to Python-based architectures.
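The API-connector pattern above can be sketched in miniature. This is a conceptual illustration only: the `fetch_users` stub stands in for a real paginated API, and sqlite3 stands in for DuckDB as the local destination.

```python
import sqlite3
from dataclasses import dataclass
from typing import Iterator, Iterable

@dataclass
class User:
    id: int
    name: str

def fetch_users() -> Iterator[User]:
    # Stubbed extract step; a real connector would page through an HTTP API.
    for row in [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]:
        yield User(**row)

def load(users: Iterable[User], conn: sqlite3.Connection) -> int:
    # Load step: materialize the typed records into a local table.
    conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
    rows = [(u.id, u.name) for u in users]
    conn.executemany("INSERT INTO users VALUES (?, ?)", rows)
    conn.commit()
    return len(rows)

conn = sqlite3.connect(":memory:")
loaded = load(fetch_users(), conn)
print(loaded)  # → 2
```

A real dlt connector adds schema inference, incremental state, and retries on top of this extract-then-load shape.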
Building scalable, modular data architectures, with a recent focus on:
- GitHub data-processing pipelines using Databricks Unity Catalog.
- High-performance API ingestion combining uv, dlt, DuckDB, and dbt.
- LLM-ready data platforms optimized for AI/ML workloads.
Built a scalable Kubernetes-based pipeline using dlt, dbt, and DuckDB to load data from Azure Blob Storage into Azure SQL Server, creating a robust and maintainable data flow.
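The raw-then-transform shape of that pipeline can be sketched locally. Assumptions are labeled in the comments: an in-memory string stands in for Azure Blob Storage, sqlite3 stands in for Azure SQL Server, and a dbt model is approximated by plain SQL over the raw layer.

```python
import sqlite3
import json
import io

# Stand-in for a blob of newline-delimited JSON pulled from Azure Blob Storage.
blob = io.StringIO("\n".join(json.dumps(r) for r in [
    {"order_id": 1, "amount": "19.90"},
    {"order_id": 2, "amount": "5.00"},
]))

conn = sqlite3.connect(":memory:")  # stand-in for the SQL Server destination

# Load step: land each raw payload untouched in a raw table.
conn.execute("CREATE TABLE raw_orders (payload TEXT)")
conn.executemany("INSERT INTO raw_orders VALUES (?)",
                 [(line.strip(),) for line in blob])

# dbt-style transform: a typed staging model built with plain SQL on the raw layer.
conn.execute("""
    CREATE TABLE stg_orders AS
    SELECT json_extract(payload, '$.order_id') AS order_id,
           CAST(json_extract(payload, '$.amount') AS REAL) AS amount
    FROM raw_orders
""")

total = conn.execute("SELECT SUM(amount) FROM stg_orders").fetchone()[0]
print(total)
```

Keeping the raw layer immutable and pushing all typing into the SQL model is what makes the dbt approach easy to rerun and audit.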
- github-trend-insights – Data pipeline analyzing GitHub trends using dlt, DuckDB, and dbt.
- agile-ai – AI-powered agile project management tools.
- Scalable Data Ingestion App – Built an extensible API ingestion framework using uv, dlt, DuckDB, and pydantic, improving data integration efficiency.
- Automated Data Processing Pipelines – Created dbt-driven transformation workflows, enabling structured reporting and analytics.
- Dagster & dbt on Kubernetes – Designed a fully automated cloud-native data pipeline, reducing manual intervention and improving observability.
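The orchestration idea behind the Dagster project above, i.e. tasks declared with dependencies and executed in order, can be shown with the standard library alone. The task names and functions here are illustrative, not from the actual project.

```python
from graphlib import TopologicalSorter

results = {}

# Each task is a plain callable; the dependency dict mirrors a
# Dagster-style asset graph (hypothetical assets for illustration).
def ingest():
    return [1, 2, 3]

def transform():
    return [x * 10 for x in results["ingest"]]

def publish():
    return sum(results["transform"])

tasks = {"ingest": ingest, "transform": transform, "publish": publish}
deps = {"ingest": set(), "transform": {"ingest"}, "publish": {"transform"}}

# Run every task after its upstream dependencies, as an orchestrator would.
for name in TopologicalSorter(deps).static_order():
    results[name] = tasks[name]()

print(results["publish"])  # → 60
```

Dagster layers scheduling, retries, and observability on this dependency-graph core, which is what removes the manual intervention mentioned above.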
- LinkedIn
- Email
- GitHub
- Personal Blog (archives of data engineering insights)