About Me

I'm a Database Administrator & Data Engineer who loves building reliable, scalable data systems that actually work in production

What I Do

I specialize in designing and maintaining high-performance database systems and data pipelines. My toolkit includes relational databases (PostgreSQL, SQL Server, MySQL), distributed streaming platforms (Kafka), and modern data orchestration tools (Apache Airflow).

Over my career, I've tackled some tough problems—like implementing high-availability PostgreSQL clusters with 99.99% uptime, and building real-time CDC pipelines that process millions of events every day. I understand database internals, replication mechanisms, and performance optimization in a way that comes from getting my hands dirty with real systems.

My approach blends technical depth with practical focus. I build systems that are fast, yes—but also maintainable, observable, and resilient when things go wrong. Every architecture choice I make considers long-term scalability and what it'll be like for the team that has to run it at 3am.

Areas of Expertise

What I'm really good at in database systems and data engineering

Database Administration

PostgreSQL and SQL Server administration, backup/recovery, security, monitoring, and capacity planning.

Performance Tuning

Query optimization, indexing strategies, execution plan analysis, and system configuration tuning.
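Execution plan analysis is easiest to show concretely. The sketch below uses SQLite from Python's standard library (not one of the server databases named above, just a self-contained stand-in) to illustrate the core idea: the same query goes from a full table scan to an index search once a suitable index exists. The table and index names are invented for the example.

```python
import sqlite3

# In-memory database with a small illustrative table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [(f"user{i}@example.com",) for i in range(1000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN rows end with a human-readable step description.
    return [row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT id FROM users WHERE email = 'user42@example.com'"

before = plan(query)  # no index on email yet: the plan is a table scan
conn.execute("CREATE INDEX idx_users_email ON users (email)")
after = plan(query)   # with the index: the plan becomes an index search

print(before)  # step description contains "SCAN"
print(after)   # step description contains "SEARCH ... idx_users_email"
```

The same workflow applies to the bigger engines (EXPLAIN ANALYZE in PostgreSQL, actual execution plans in SQL Server): read the plan first, then decide whether an index, a rewrite, or a configuration change is the right fix.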

High Availability

Designing fault-tolerant clusters with replication, automatic failover, and load balancing for mission-critical systems.
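One small building block of replication monitoring can be shown in plain Python: PostgreSQL reports WAL positions as LSNs of the form `high/low` (two hex halves of a 64-bit byte offset), and replica lag in bytes is just the difference between the primary's and the replica's positions, which is what `pg_wal_lsn_diff` computes server-side. The function names below are my own.

```python
def lsn_to_bytes(lsn: str) -> int:
    """Convert a PostgreSQL LSN like '16/B374D848' to an absolute byte offset.

    An LSN is two hex fields: the high 32 bits and the low 32 bits
    of a 64-bit position in the write-ahead log.
    """
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def replication_lag_bytes(primary_lsn: str, replica_lsn: str) -> int:
    # Same idea as pg_wal_lsn_diff(primary, replica): positive means
    # the replica is behind by that many bytes of WAL.
    return lsn_to_bytes(primary_lsn) - lsn_to_bytes(replica_lsn)

print(replication_lag_bytes("16/B374D848", "16/B374D000"))  # 2120
```

In a real cluster the two LSNs would come from `pg_current_wal_lsn()` on the primary and `pg_last_wal_replay_lsn()` on the standby, and the lag figure would feed an alert threshold rather than a print statement.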

Data Streaming & CDC

Real-time data pipelines using Kafka and Debezium for change data capture and event-driven architectures.
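The heart of a CDC consumer is applying change events in order. Debezium envelopes carry `before` and `after` row images plus an `op` code (`c` = create, `u` = update, `d` = delete, `r` = snapshot read); the sketch below applies a simplified event of that shape to an in-memory replica using only the standard library. Real envelopes carry more metadata (a `source` block, schema information), and the table here is a stand-in.

```python
import json

# A simplified Debezium-style change event: an update to row id=7.
raw = json.dumps({
    "op": "u",  # c = create, u = update, d = delete, r = snapshot read
    "before": {"id": 7, "email": "old@example.com"},
    "after":  {"id": 7, "email": "new@example.com"},
    "ts_ms": 1700000000000,
})

def apply_change(event_json: str, table: dict) -> None:
    """Apply one CDC event to an in-memory replica keyed by primary key."""
    event = json.loads(event_json)
    if event["op"] == "d":
        # Delete: only the "before" image exists; drop the row.
        table.pop(event["before"]["id"], None)
    else:
        # Create / update / snapshot read: upsert the "after" image.
        row = event["after"]
        table[row["id"]] = row

replica = {7: {"id": 7, "email": "old@example.com"}}
apply_change(raw, replica)
print(replica[7]["email"])  # new@example.com
```

In production the events arrive from Kafka topics partitioned by key, which preserves per-row ordering, and the apply step writes to a real sink instead of a dict.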

ETL/ELT Pipelines

Building scalable data pipelines with Apache Airflow, Python, and modern orchestration frameworks.
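The core idea behind any orchestrator is running tasks in dependency order. As a minimal illustration (not Airflow itself, and with made-up task names), the standard library's graphlib can topologically sort a DAG declared as task-to-predecessors, much like wiring extract >> transform >> load in an Airflow DAG:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on.
dag = {
    "extract_orders":   set(),
    "extract_users":    set(),
    "transform_joined": {"extract_orders", "extract_users"},
    "load_warehouse":   {"transform_joined"},
}

# static_order() yields every task after all of its dependencies.
order = list(TopologicalSorter(dag).static_order())
print(order)  # both extracts precede the transform, which precedes the load
```

Airflow layers scheduling, retries, backfills, and observability on top of exactly this dependency structure, which is why declaring pipelines as DAGs pays off as they grow.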

Cloud & DevOps

Experience with AWS, Azure, Docker, Kubernetes, and infrastructure as code for database deployments.

How I Think About Engineering

Reliability First

I put reliability and data integrity first in every architecture decision. Systems should fail gracefully and recover on their own. For transactional systems, zero data loss isn't optional—it's the baseline.

Observability Matters

You can't fix what you can't see. I believe comprehensive monitoring, logging, and alerting aren't optional. Every system I build provides clear insights into its health and performance.

Automation & Efficiency

I automate repetitive tasks to cut down human error and give people time for higher-value work. Infrastructure as code, automated testing, and CI/CD are table stakes in my book.

Documentation Matters

Good documentation is as important as good code. I make sure architecture decisions, runbooks, and operational procedures are well-documented so teams can share knowledge and get up to speed quickly.

Technologies I Work With

The tools I use regularly

Databases

PostgreSQL, SQL Server, MySQL, Redis, MongoDB, Elasticsearch

Data Engineering

Apache Airflow, Kafka, Debezium, dbt, Apache Spark, Python

Infrastructure

Linux, Docker, Kubernetes, Terraform, Ansible, AWS