Consultant AWS Data Engineer IRC281939
Description

We are seeking an experienced Data Engineer to join our team to build and maintain scalable, production-grade data infrastructure across AWS while contributing to our strategic platform modernisation towards Databricks. This role requires technical independence, strong stakeholder engagement, and a commitment to data quality and security. You’ll design, implement, and optimise the systems responsible for collecting, storing, processing, and analysing our critical data assets, build strategic data engineering pipelines on AWS, and support the migration of the platform to Databricks.

Requirements
- Strong hands-on experience delivering production-grade data engineering solutions.
- Expert-level SQL and proficiency in Python/PySpark; capable of building reusable, scalable code.
- Experience with modern orchestration tools (e.g., Airflow, AWS Step Functions).
- Experience with Infrastructure as Code (CloudFormation), unit testing, and GitLab.
- Hands-on experience with the AWS cloud platform, including S3, Lambda, Glue, Step Functions, Athena, ECS, IAM, and KMS.
- Strong experience in Databricks development and migration, including Apache Spark, Delta Lake, Unity Catalog, and MLflow.
- Ability to engage with stakeholders, understand business requirements, and translate them effectively into technical design.
- AI/ML knowledge is desirable.
Job Responsibilities

- Design, implement, and maintain robust data pipelines that ensure the transfer and processing of durable, complete, and consistent data.
- Develop scalable data warehouses and data lakes that handle high data volumes and adhere to required security standards.
- Collaborate with Data Scientists to build and deploy Machine Learning models.
- Proactively analyse existing processes, recommend improvements, and drive technical solutions end-to-end.