Data Engineer
6 Month Contract
Outside IR35
Fully Remote
£350/Day
About the Role
We are seeking a Data Engineer to design, build, and maintain scalable data pipelines and cloud-based data platforms that support analytics, reporting, and machine learning initiatives. You will work closely with Analytics, Product, Engineering, and Business teams to ensure reliable and high-quality data is available across the organization.
The ideal candidate has strong SQL and Python skills, experience with modern data stack technologies, and a solid understanding of data modeling, ETL/ELT processes, and cloud infrastructure.
Key Responsibilities
- Design, develop, and maintain scalable ETL/ELT pipelines
- Build and optimize data models and data warehouse architectures
- Integrate data from multiple internal and external sources
- Ensure data quality, integrity, security, and governance standards
- Monitor and troubleshoot data pipeline performance and failures
- Collaborate with analysts, data scientists, and software engineers to support business requirements
- Improve data platform scalability, reliability, and cost efficiency
- Automate manual data processes and workflows
- Document data systems, transformations, and operational procedures
- Support real-time and batch data processing solutions
Required Skills & Experience
- 3+ years of experience in Data Engineering or related roles
- Strong SQL skills and experience optimizing complex queries
- Proficiency in Python for data processing and automation
- Experience with ETL/ELT pipeline development
- Hands-on experience with cloud platforms such as AWS, Azure, or GCP
- Experience with data warehousing technologies such as Snowflake, Redshift, BigQuery, or Synapse
- Familiarity with orchestration tools such as Airflow or Dagster
- Experience with big data processing frameworks such as Spark or Databricks
- Understanding of data modeling concepts and best practices
- Experience working with Git and CI/CD workflows
Preferred Qualifications
- Experience with streaming technologies such as Kafka or Kinesis
- Experience with dbt and modern data stack tools
- Knowledge of infrastructure-as-code tools such as Terraform
- Experience supporting machine learning or AI data workflows
- Familiarity with data governance and security best practices
- Exposure to containerization technologies such as Docker and Kubernetes
Tech Stack
- Python
- SQL
- Airflow
- Spark / Databricks
- Snowflake / BigQuery / Redshift
- AWS / Azure / GCP
- dbt
- Kafka
- GitHub Actions / CI/CD
If this sounds like you, apply now or get in touch via email at pgeorge@pg-rec.com