Slough, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
…desktops.

Requirements:
- Demonstrable experience within consulting/managed service environments
- Strong experience in building, configuring and optimising Data Lake environments
- Experience with Landing Zones, Transit Gateways, Redshift, Firehose, CloudTrail and WorkSpaces
- Experience within Linux-based environments
- Strong understanding of AWS Data Lake solutions and AWS Redshift
- AWS Certified Solutions Architect, Developer, or SysOps …
…collaborate directly with clients to shape strategy, drive delivery, and guide internal engineering standards.

Your responsibilities:
- Build and maintain large-scale data lakes and ETL pipelines using AWS S3, Redshift, Glue, Lambda, DynamoDB, and Matillion
- Translate client requirements into scalable and secure data architectures
- Drive infrastructure-as-code and CI/CD deployment practices
- Process structured and semi-structured … in fast-paced, high-value engagements

This Principal Data Engineer will bring:
- Extensive experience with ETL/ELT pipelines and data transformation patterns
- Proficiency in AWS cloud services, particularly Redshift, Glue, Matillion, and S3
- Strong command of data quality, data lineage, and metadata practices
- Fluency in database technologies (both relational and NoSQL)
- Experience with Linux environments and data visualisation …
…to scalable ML workflows that ensure fast iteration, effective monitoring, and smooth deployment of AI features. Working with additional tools and platforms (bonus): familiarity with Claude, Anzo or AWS Redshift is a plus, and you'll have the opportunity to deepen your expertise on the job.

What you'll need to succeed:
- Experience working with cloud infrastructure, especially AWS … on scalability and performance
- A desire to collaborate in a cross-functional environment, sharing ideas and learning as part of a team
- Bonus points for experience with Claude, AWS Redshift, or graph-based systems like Anzo

So, what's in it for you? The chance to shape the next generation of AI-powered enterprise tools with real business impact. …
…scalable, automated solutions. Provide analysis and insights to support decision-making. Collaborate with Data Engineering and Analytics teams to improve data sources and processes.

Experience Requirements:
- Data analysis with Redshift, Oracle, NoSQL, etc.
- Data visualization skills with Tableau, QuickSight, or similar
- Data modeling, warehousing, and ETL pipeline development
- Statistical analysis with R, SAS, MATLAB
- SQL and scripting with Python …
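The SQL-plus-Python combination the requirements above describe can be sketched with the stdlib sqlite3 module (sqlite3 stands in for a Redshift or Oracle connection here; the table and column names are invented for illustration):

```python
import sqlite3

# In-memory database stands in for a Redshift/Oracle connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("north", 120.0), ("north", 80.0), ("south", 200.0)],
)

# A typical analysis query: aggregate revenue per region.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('north', 200.0), ('south', 200.0)]
```

The same GROUP BY pattern ports directly to a warehouse engine; only the connection library changes.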
Birmingham, England, United Kingdom Hybrid / WFH Options
Xpertise Recruitment
…modern data platforms for large organizations. Expect variety: you'll work across cloud platforms like Azure, AWS, and GCP, leveraging tools such as Databricks, Data Factory, Synapse, Kafka, Glue, Redshift, BigQuery, and more.

About You: You're an engineer at heart, with a passion for building efficient, scalable data systems. To succeed, you'll need: proficiency in object-oriented …
…the data platform, including data pipelines, orchestration and modelling. Lead the team in building and maintaining robust data pipelines, data models, and infrastructure using tools such as Airflow, AWS Redshift, DBT and Looker, ensuring the team follows agile methodologies to improve delivery cadence and responsiveness. Contribute to hands-on coding, particularly in areas requiring architectural input, prototyping, or critical delivery …

- Strong mentoring skills and ability to foster team growth and development
- Strong understanding of the data engineering lifecycle, from ingestion to consumption
- Hands-on experience with our data stack (Redshift, Airflow, Python, DBT, MongoDB, AWS, Looker, Docker)
- Understanding of data modelling, transformation, and orchestration best practices
- Experience delivering both internal analytics platforms and external data-facing products
- Knowledge of …
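Orchestration tools like the Airflow mentioned above model a pipeline as a DAG of dependent tasks. A minimal sketch of that idea using only Python's stdlib graphlib module (the task names are hypothetical; a real Airflow DAG uses Airflow's own operator API):

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on (Airflow-style DAG).
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load_redshift": {"transform"},
    "refresh_looker": {"load_redshift"},
}

# A valid execution order respects every dependency edge.
order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'load_redshift', 'refresh_looker']
```

An orchestrator's core job is exactly this ordering, plus scheduling, retries, and monitoring on top.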
…architecture, with at least 3 years in the insurance industry. Strong understanding of London Market and specialty insurance operations and data flows. Proven experience with AWS (S3, Glue, Redshift, Lambda, etc.), Databricks, and Snowflake. Expertise in building and optimizing medallion architecture. Solid knowledge of data governance, security, and compliance frameworks. Experience with ETL/ELT tools …
…/D Inside IR35.

Key Responsibilities:
- Architect, implement, and manage infrastructure using Terraform, ensuring security, scalability, and reliability
- Configure and optimize AWS services such as EC2, S3, Lambda, IAM, Redshift, and VPC to support business needs
- Develop and maintain CI/CD pipelines using Git/GitLab and Jenkins for automated deployments and testing
- Apply DevOps methodologies to streamline …
…driven culture within the organisation.

Essential Criteria:
- Bachelor's degree in Statistics, Mathematics, Computer Science, or a related quantitative discipline
- 7+ years of experience with advanced SQL (Snowflake, BigQuery, Redshift, Oracle, PostgreSQL, MSSQL, etc.)
- 5+ years of experience with reporting/visualization tools (Looker, Tableau, Power BI, etc.)
- Strong knowledge of Looker/LookML highly desirable
- Deep understanding of …
…libraries and cloud software development kits (APIs). Optimise performance, data storage and retrieval processes for efficiency and scalability. Ensure the data lake and Online Analytical Processing database (OLAP: Redshift or similar) can handle large data volumes and highly concurrent access. Ensure data quality, security, and compliance with industry standards.

Requirements: proven experience in data engineering, with …
Essential Skills:
- In-depth knowledge of SQL and Python to build, monitor and maintain data pipelines
- Experience with cloud infrastructure (e.g. AWS Lambda, S3, EC2, API Gateway, Airflow, Redshift)
- Familiarity with CI/CD tools and version control systems (e.g. GitHub Actions)
- Understanding of Infrastructure as Code (e.g. CloudFormation)
- Interest in developing analytical skills using …
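In a pipeline built on the AWS services listed above, a Lambda function typically receives an S3 event notification and processes the referenced objects. A minimal handler sketch (the event shape follows the standard S3 notification format; the handler name and the processing step are hypothetical, and the actual object fetch via boto3 is omitted):

```python
def handler(event, context=None):
    """Collect (bucket, key) pairs from an S3 event notification."""
    objects = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        objects.append((s3["bucket"]["name"], s3["object"]["key"]))
    # Real code would fetch each object with boto3 and load it onward
    # (e.g. into Redshift); that part is omitted here.
    return objects

# Example S3 notification payload (abridged to the fields the handler reads).
event = {
    "Records": [
        {"s3": {"bucket": {"name": "raw-data"}, "object": {"key": "2024/01/events.json"}}}
    ]
}
print(handler(event))  # [('raw-data', '2024/01/events.json')]
```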
…Meshes and Data Warehouses.
- Experience of a wide range of data sources: SQL, NoSQL and Graph
- A proven track record of infrastructure delivery on any data platform (Snowflake, Elastic, Redshift, Databricks, Splunk, etc.)
- Strong and demonstrable experience writing regular expressions and/or JSON parsing, etc.
- Strong experience in log processing (Cribl, Splunk, Elastic, Apache NiFi, etc.) …
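The regular-expression and JSON-parsing skills above come together in log processing. A small stdlib sketch that extracts the JSON payload from a timestamped log line (the log format here is invented for illustration; real pipelines match whatever format Cribl, Splunk, or NiFi emits):

```python
import json
import re

# Match "<timestamp> <level> <json payload>" lines.
LINE_RE = re.compile(r"^(?P<ts>\S+) (?P<level>[A-Z]+) (?P<payload>\{.*\})$")

def parse_line(line: str) -> dict:
    """Parse one log line into a flat event dict."""
    m = LINE_RE.match(line)
    if m is None:
        raise ValueError(f"unparseable log line: {line!r}")
    event = json.loads(m.group("payload"))
    event["timestamp"] = m.group("ts")
    event["level"] = m.group("level")
    return event

line = '2024-05-01T12:00:00Z ERROR {"service": "ingest", "code": 500}'
parsed = parse_line(line)
print(parsed)  # {'service': 'ingest', 'code': 500, 'timestamp': ..., 'level': 'ERROR'}
```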
…tools such as Informatica PowerCenter and BDM, AutoSys, Airflow, and SQL Server Agent. Experience with cloud platforms, preferably AWS. Strong knowledge of AWS cloud services, including EMR, RDS Postgres, Redshift, Athena, S3, and IAM. Solid understanding of data warehousing principles and best practices. Strong proficiency in SQL for data manipulation, reporting, and optimization. Knowledge of data modeling and schema …
West Bromwich, England, United Kingdom Hybrid / WFH Options
Search Allies
…with GDPR and industry regulations.

What We're Looking For:
- 4+ years of experience in Python development (Flask, Django)
- Strong expertise in AWS cloud environments (EC2, S3, RDS, Redshift, Athena)
- Experience with SQL databases (PostgreSQL, SQLAlchemy)
- Knowledge of ETL pipelines, API development, and front-end technologies (HTML, CSS)
- Experience with Infrastructure as Code tools (Terraform …
…/BI Engineering experience
- Excellent SQL skills
- Understanding of data warehousing, data modelling concepts and structuring new data tables
- Knowledge of cloud-based MPP data warehousing (e.g. Snowflake, BigQuery, Redshift)

Nice to have:
- Experience developing in a BI tool (Looker or similar)
- Good practical understanding of version control
- SQL ETL/ELT knowledge, experience with DAGs to manage script …
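The ELT pattern referenced above (load raw data first, then transform it with SQL inside the warehouse) can be sketched with stdlib sqlite3 standing in for an MPP warehouse; the staging and fact table names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Load: land raw rows untouched in a staging table (values still text).
conn.execute("CREATE TABLE stg_events (user_id TEXT, amount TEXT)")
conn.executemany(
    "INSERT INTO stg_events VALUES (?, ?)",
    [("u1", "10.5"), ("u1", "4.5"), ("u2", "7.0")],
)

# Transform: cast and aggregate in SQL, downstream of the load.
conn.execute(
    """
    CREATE TABLE fct_user_spend AS
    SELECT user_id, SUM(CAST(amount AS REAL)) AS total_spend
    FROM stg_events
    GROUP BY user_id
    """
)
rows = conn.execute("SELECT * FROM fct_user_spend ORDER BY user_id").fetchall()
print(rows)  # [('u1', 15.0), ('u2', 7.0)]
```

Keeping the raw staging table intact is the point of ELT: transformations can be re-run or revised without re-extracting from the source.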
…this role, you will have the opportunity to:
- Build and optimize data products and pipelines from various sources, ensuring data availability, quality, and reliability
- Administer and maintain databases (Snowflake, Redshift), ensuring optimal performance and reliability, with a focus on the design, implementation and management of data models, schemas and tables
- Manage user access, roles, and permissions
- Support data needs …
…the insurance industry. Deep understanding of London Market and specialty insurance operations, including end-to-end data flows. Proven hands-on expertise with AWS services such as S3, Glue, Redshift, and Lambda, as well as Databricks and Snowflake. Skilled in designing and optimizing medallion architecture for scalable and efficient data processing. Strong grasp of data governance, security, and compliance …
…collaboration skills across departments for issue triaging and resolution. Team-oriented attitude and enthusiasm for work.

Highly Desirable Skills & Experiences:
- Experience supporting cloud services in production
- Familiarity with Snowflake, Redshift, BigQuery
- Knowledge of GCP and AWS
- Startup experience is a plus

About Us: Sigma is a cloud analytics and business intelligence platform that empowers teams to explore data independently …
…such as: Hadoop, Kafka, Apache Spark, Apache Flink, object, relational and NoSQL data stores. Hands-on experience with big data application development and cloud data warehousing (e.g. Hadoop, Spark, Redshift, Snowflake, GCP BigQuery). Expertise in building data architectures that support batch and streaming paradigms. Experience with standards such as JSON, XML, YAML, Avro, Parquet. Strong communication skills. Open to …
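Of the interchange formats listed above, newline-delimited JSON is a common bridge between the batch and streaming paradigms: each line is a self-contained record, so a consumer can process input incrementally instead of parsing one monolithic document. A stdlib sketch (the record fields are invented):

```python
import io
import json

# Newline-delimited JSON: one self-contained record per line.
ndjson = '{"id": 1, "event": "click"}\n{"id": 2, "event": "view"}\n'

def stream_records(fp):
    """Yield records one at a time, without loading the whole input."""
    for line in fp:
        if line.strip():
            yield json.loads(line)

records = list(stream_records(io.StringIO(ndjson)))
print(records)  # [{'id': 1, 'event': 'click'}, {'id': 2, 'event': 'view'}]
```

Columnar formats like Parquet serve the batch side of the same trade-off, optimising for scan-heavy analytical reads rather than record-at-a-time delivery.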
…Strong experience working with event-based architecture and strong proficiency with Kafka. Proficiency in programming languages such as Java or Kotlin. Strong experience with data warehousing technologies (e.g. Snowflake, Redshift) and proficiency in SQL and jOOQ query generation. Experience working in an event-driven architecture environment and with cloud platforms (e.g. AWS, Azure, GCP). Familiarity with data governance …