and orchestration tools like Kubernetes. Familiarity with Infrastructure as Code (IaC) tools such as Terraform or CloudFormation. Knowledge of data engineering and experience with big data technologies like Hadoop, Spark, or Kafka. Experience with CI/CD pipelines and automation, using tools such as Jenkins, GitLab, or CircleCI. ABOUT BUSINESS UNIT: IBM Consulting is IBM's consulting and global professional …
London (City of London), South East England, United Kingdom
Fimador
scalable pipelines, data platforms, and integrations, while ensuring solutions meet regulatory standards and align with architectural best practices. Key Responsibilities: Build and optimise scalable data pipelines using Databricks and Apache Spark (PySpark). Ensure performance, scalability, and compliance (GxP and other standards). Collaborate on requirements, design, and backlog refinement. Promote engineering best practices including CI/CD … experience: Experience with efficient, reliable data pipelines that improve time-to-insight. Knowledge of secure, auditable, and compliant data workflows. Know-how in optimising performance and reducing costs through Spark and Databricks tuning. Ability to create reusable, well-documented tools enabling collaboration across teams. A culture of engineering excellence driven by mentoring and high-quality practices. Preferred Experience … Databricks in a SaaS environment, Spark, Python, and database technologies. Event-driven and distributed systems (Kafka, AWS SNS/SQS, Java, Python). Data Governance, Data Lakehouse/Data Intelligence platforms. AI software delivery and AI data preparation.
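By way of illustration, a minimal sketch of the kind of Spark and Databricks tuning this posting refers to: sizing shuffle partitions, caching a reused intermediate, and partitioning output for cheaper downstream reads. All paths, column names, and configuration values are hypothetical, not taken from the posting.

```python
# Hypothetical sketch: a small PySpark pipeline with common tuning knobs.
# Paths, column names, and settings are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("pipeline-tuning-sketch")
    # Match shuffle parallelism to cluster size instead of the default 200
    .config("spark.sql.shuffle.partitions", "64")
    .getOrCreate()
)

# Read raw events (hypothetical location and schema)
events = spark.read.parquet("s3://example-bucket/raw/events/")

# Cache an aggregate that several downstream steps reuse
daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .count()
    .cache()
)

# Partition output by date so consumers can prune the files they scan
(daily.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/daily_event_counts/"))
```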
EC4N 6JD, Vintry, United Kingdom Hybrid / WFH Options
Syntax Consultancy Ltd
data modelling techniques + data integration patterns. Experience of working with complex data pipelines, large data sets, data pipeline optimization + data architecture design. Implementing complex data transformations using Spark, PySpark or Scala + working with SQL/MySQL databases. Experience with data quality, data governance processes, Git version control + Agile development environments. Azure Data Engineer certification preferred.
Warrington, Cheshire, North West England, United Kingdom
iO Associates
requirements and provide technical guidance. Troubleshoot and resolve complex issues related to Databricks infrastructure. Essential Skills & Experience: Proven experience working with Databricks in a similar role. Strong knowledge of Spark, Scala, Python, and SQL. Experience with cloud platforms such as AWS or Azure. Ability to work collaboratively in a fast-paced environment. Excellent problem-solving and communication skills. Desirable …
is mandatory. Experience building components for enterprise central data platforms (data warehouses, Operational Data Stores, access layers with APIs, file extracts, user queries). Hands-on experience with SQL, Python, Spark, Kafka. Excellent communication skills, including good working proficiency in verbal and written English. About this Job: Scopeworker Business Intelligence processes enterprise customer data in real time. We develop and …
requirements. To secure one of these Senior Data Engineer roles you must be able to demonstrate the following experience: Experience in prominent languages and frameworks such as Python, Scala, Spark and SQL. Experience working with any database technologies from an application programming perspective - Oracle, MySQL, MongoDB, etc. Experience with the design, build and maintenance of data pipelines and infrastructure.
Oxford District, South East England, United Kingdom
Llama Recruitment Solutions
technical teams and stakeholders. Effective problem-solver who takes initiative in complex production settings. Experience with scientific computing, deep learning, big data, or health IT ontologies (e.g., PyTorch, JAX, Spark, HL7, FHIR) (desirable). Familiarity with cloud infrastructure (Azure/AWS), infrastructure as code, Kubernetes, Linux, Docker, data pipelines, and MLOps tools (desirable). Passion for biomedical topics and startup experience.
London, England, United Kingdom Hybrid / WFH Options
Client Server
GCP including BigQuery, Pub/Sub, Cloud Composer and IAM. You have strong Python, SQL and PySpark skills. You have experience with real-time data streaming using Kafka or Spark. You have a good knowledge of Data Lakes, Data Warehousing and Data Modelling. You're familiar with DevOps principles, containerisation and CI/CD tools such as Jenkins or GitHub …
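As a sketch of the real-time streaming skill named above, here is a minimal Spark Structured Streaming job that consumes a Kafka topic. It assumes the spark-sql-kafka connector package is on the classpath; the broker address, topic name, and checkpoint path are hypothetical.

```python
# Hypothetical sketch: Spark Structured Streaming reading from Kafka.
# Requires the spark-sql-kafka-0-10 connector package; the broker, topic,
# and checkpoint values below are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

# Subscribe to a topic; Kafka delivers key/value as binary columns
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "events")
    .load()
)

# Decode the payload to a string for downstream parsing
messages = raw.select(F.col("value").cast("string").alias("payload"))

# Console sink for demonstration; a real job would target a table or lake
query = (
    messages.writeStream
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/events")
    .start()
)
query.awaitTermination()
```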
Belfast, County Antrim, Northern Ireland, United Kingdom Hybrid / WFH Options
Aspire Personnel Ltd
stakeholders in a fast-paced environment. Experience in the design and deployment of production data pipelines from ingestion to consumption within a big data architecture, using Java, Python, Scala, Spark, SQL. Experience performing tasks such as writing scripts, extracting data using APIs, writing SQL queries, etc. Experience in processing large amounts of structured and unstructured data, including integrating data …
well as programming languages such as Python, R, or similar. Strong experience with machine learning frameworks (e.g., TensorFlow, scikit-learn) and familiarity with data technologies (e.g., Hadoop, Spark). About Vixio: Our mission is to empower businesses to efficiently manage and meet their regulatory obligations with our unique combination of human expertise and Regulatory Technology (RegTech) SaaS …
data quality, or other areas directly relevant to data engineering responsibilities and tasks. Proven project experience developing and maintaining data warehouses in big data solutions (Snowflake). Expert knowledge of Apache technologies such as Kafka, Airflow, and Spark to build scalable and efficient data pipelines. Ability to design, build, and deploy data solutions that capture, explore, transform, and utilize …
teams to build scalable data pipelines and contribute to digital transformation initiatives across government departments. Key Responsibilities: Design, develop and maintain robust data pipelines using PostgreSQL and Airflow or Apache Spark. Collaborate with frontend/backend developers using Node.js or React. Implement best practices in data modelling, ETL processes and performance optimisation. Contribute to containerised deployments (Docker/… within Agile teams and support DevOps practices. What We're Looking For: Proven experience as a Data Engineer in complex environments. Strong proficiency in PostgreSQL and either Airflow or Spark. Solid understanding of Node.js or React for integration and tooling. Familiarity with containerisation technologies (Docker/Kubernetes) is a plus. Excellent communication and stakeholder engagement skills. Experience working within …
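To illustrate the Airflow-plus-PostgreSQL pattern these responsibilities describe, a minimal DAG that refreshes a reporting table on a daily schedule. It assumes Airflow 2.4+ with the Postgres provider installed; the connection id, schema and table names, and SQL are hypothetical.

```python
# Hypothetical sketch: a daily Airflow DAG driving a PostgreSQL transformation.
# Assumes Airflow 2.4+ and apache-airflow-providers-postgres; connection id,
# schemas, and SQL are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.providers.postgres.operators.postgres import PostgresOperator

with DAG(
    dag_id="daily_event_counts",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Upsert daily counts from a staging table into a reporting table
    # (assumes a unique constraint on reporting.daily_counts.day)
    refresh_counts = PostgresOperator(
        task_id="refresh_counts",
        postgres_conn_id="example_postgres",
        sql="""
            INSERT INTO reporting.daily_counts (day, n)
            SELECT created_at::date, count(*)
            FROM staging.events
            GROUP BY 1
            ON CONFLICT (day) DO UPDATE SET n = EXCLUDED.n;
        """,
    )
```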