- Experience in data engineering, with expertise in SQL, BigQuery, GCP, and large-scale data processing using Apache Iceberg, Starburst, and Trino.
- Proven track record of designing and optimizing ETL/ELT pipelines and cloud-based data workflows.
- Proficiency in SQL, including query optimization and performance tuning.
- Experience working with BigQuery, Google Cloud Storage (GCS), and GCP data services.
- Knowledge of data lakehouse architectures, data warehousing, and distributed query engines.
- Hands-on experience with Apache Iceberg for managing large-scale analytical tables with ACID transaction guarantees.
- Expertise in Trino and Starburst (its enterprise distribution) for federated queries across heterogeneous data sources.
- Familiarity with Python, Java, or Scala for data pipeline development.
- Experience with Airflow for data pipeline orchestration, and with Terraform or Kubernetes for infrastructure automation and deployment.
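The SQL query optimization and performance tuning expected above can be illustrated with a minimal, engine-agnostic sketch. This uses Python's built-in `sqlite3` purely as a stand-in (the role targets BigQuery and Trino, whose plans look different); the table and index names are hypothetical. It shows the core tuning loop: inspect the query plan, add an index on the filtered column, and confirm the plan switches from a full scan to an index seek.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, ts TEXT)")
conn.executemany(
    "INSERT INTO events (user_id, ts) VALUES (?, ?)",
    [(i % 100, f"2024-01-{i % 28 + 1:02d}") for i in range(1000)],
)

query = "SELECT * FROM events WHERE user_id = 7"

# Without an index on user_id, the engine must scan every row.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# Add an index on the filtered column, then re-check the plan.
conn.execute("CREATE INDEX idx_events_user ON events(user_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# The last column of each plan row is a human-readable detail string,
# typically "SCAN events" before and "SEARCH events USING INDEX ..." after.
print(plan_before[0][3])
print(plan_after[0][3])
```

The same inspect-plan, adjust-layout, re-check loop applies in BigQuery (execution details, partitioning/clustering) and Trino (`EXPLAIN`, connector pushdown), just with different levers.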