Data Engineer Active SC essential Croydon - up to 1 day per week £500/day outside until the end of June 2026 Please note, this is an outside IR-35 position on a Statement of Work basis; ideally we would prefer candidates with experience working under those arrangements. Essential skills: AWS Data Engineering Snowflake … Databricks Responsibilities: Develop and manage data models, schemas, and metadata to support analytics and reporting needs. Collaborate with Data Analysts, Scientists, and Business Analysts to ensure data availability and usability. Implement data quality checks, validation routines, and monitoring to ensure data integrity and reliability. Optimise data workflows for … performance, cost-efficiency, and maintainability using tools such as Azure Data Factory, AWS Data Pipeline for data orchestration, Databricks, or Apache Spark. Support the integration of data into visualisation platforms (e.g. Power BI, ServiceNow) and other analytical environments. Ensure compliance with data governance, security, and privacy …
Leeds, West Yorkshire, Yorkshire, United Kingdom Hybrid / WFH Options
JNBentley
SQL Developer and want to take your career further in a thriving office environment? Are you a strong communicator who works well within a team? Do you understand data pipeline tools and data modelling? We're seeking an SQL Developer to join our expanding Database team in Leeds! In the role, you will:
Analyse … business requirements through collaboration with stakeholders, IT Management, and the user community
Develop new systems, integrate third-party platforms, and maintain existing applications
Use data to deliver business improvements and support informed decision-making
Work with stakeholders on data extraction and manipulation tasks
Write and optimise in-application SQL statements
Provide user support for report errors … debugging, and issue resolution
Integrate new datasets into the data warehouse
Create and maintain SQL data feeds across internal, partner, and supplier systems
Design, build, and maintain robust ETL/ELT pipelines
Preferably support app development, integration, and data reporting/injection using .Net (C#/VB/Java/REST)
Candidate Specification: We …
SR2 are supporting a large central government data modernisation programme seeking two Engineers/DevOps Engineers to enhance and secure a cloud-based data platform built on AWS. The work spans an Amazon Data Warehouse environment and a Fast Data Pipeline ingesting and transforming data from multiple government and third-party sources. Key Responsibilities: Catalogue and remediate high-priority vulnerabilities across application, infrastructure, and pipeline layers. Enhance CI/CD pipelines, container orchestration, and monitoring across AWS and Kubernetes. Build automation around data ingestion, transformation, and feed management. Improve operational resilience and graceful failure handling across services. …
At Carda Health, we've reimagined rehab. Our program allows patients to complete inspiring, convenient, life-saving therapy remotely. Who are we? We are a team of clinicians, data scientists, mathematicians and repeat entrepreneurs. And a few recovering financiers. Our belief is that technology and data, when applied to the right problem, transform people's lives … the business who have also backed the likes of Livongo, Hinge, Calm, MDLive, and others. What we're looking for We're looking for a Machine Learning/Data Engineer to lead the development of ML applications and data infrastructure that will accelerate our ability to gather actionable, high-impact insights from complex healthcare data. We're looking for someone who is effective at all levels of the machine learning and data stack, with a track record of delivering applications from the ground up. You'll lead research, prototyping, and deployment of models that risk-stratify patients, predict readmissions, surface timely interventions for our clinicians, and more. As one of our early data …
Use modern cloud-native tools and CI/CD platforms to maintain secure, observable, and high-performing environments. Relevant Skills:
Essential Skills:
Proficiency in Python or Scala for data processing
Strong SQL skills for querying and managing relational databases
Experience with AWS services including S3, Glue, Redshift, Lambda, and Athena
Knowledge of ETL processes and data pipeline development
Understanding of data modelling and data warehousing concepts
Familiarity with version control systems, particularly Git
Desirable Skills:
Experience with infrastructure as code tools such as Terraform or CloudFormation
Exposure to Apache Spark for distributed data processing
Familiarity with workflow orchestration tools such as Airflow or AWS …