and contribute to technical roadmap planning

Technical Skills:
- Great SQL skills with experience in complex query optimization
- Strong Python programming skills with experience in data processing libraries (pandas, NumPy, Apache Spark)
- Hands-on experience building and maintaining data ingestion pipelines
- Proven track record of optimising queries, code, and system performance
- Experience with open-source data processing frameworks (Apache Spark, Apache Kafka, Apache Airflow)
- Knowledge of distributed computing concepts and big data technologies
- Experience with version control systems (Git) and CI/CD practices
- Experience with relational databases (PostgreSQL, MySQL or similar)
- Experience with containerization technologies (Docker, Kubernetes)
- Experience with data orchestration tools (Apache Airflow or Dagster)
- Understanding of data warehousing concepts and dimensional modelling …
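The "complex query optimization" skill above is easiest to see concretely. A minimal sketch using Python's stdlib SQLite bindings shows how adding an index changes a query plan from a full table scan to an index search; the table and column names are invented for illustration.

```python
import sqlite3

# Hypothetical table: names are illustrative, not from any real system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, payload TEXT)")
conn.executemany(
    "INSERT INTO events (user_id, payload) VALUES (?, ?)",
    [(i % 100, f"event-{i}") for i in range(1000)],
)

query = "SELECT COUNT(*) FROM events WHERE user_id = ?"

# Without an index, SQLite must scan every row.
plan_before = conn.execute(f"EXPLAIN QUERY PLAN {query}", (42,)).fetchone()[3]

# With an index on the filtered column, the engine seeks directly to matches.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan_after = conn.execute(f"EXPLAIN QUERY PLAN {query}", (42,)).fetchone()[3]

print(plan_before)  # e.g. "SCAN events"
print(plan_after)   # e.g. "SEARCH events USING INDEX idx_events_user (user_id=?)"
```

The same scan-vs-seek reasoning carries over to query tuning in PostgreSQL or MySQL, though the plan output differs per engine.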
london (city of london), south east england, united kingdom
Vallum Associates
Data Engineer - Azure Databricks, Apache Kafka
Permanent | Basingstoke (Hybrid - x2 PW) | Circa £70,000 + Excellent Package

Overview
We're looking for a skilled Data Analytics Engineer to help drive the evolution of our client's data platform. This role is ideal for someone who thrives on building scalable data solutions and is confident working with modern tools such as Azure Databricks, Apache Kafka, and Spark. In this role, you'll play a key part in designing, delivering, and optimising data pipelines and architectures. Your focus will be on enabling robust data ingestion and transformation to support both operational and analytical use cases. If you're passionate about data engineering and want to make a meaningful impact in a collaborative, fast-paced environment, we want to hear from you!

Role and Responsibilities
- Designing and building scalable data pipelines using Apache Spark in Azure Databricks
- Developing real-time and batch data ingestion workflows, ideally using Apache Kafka
- Collaborating with data scientists, analysts, and business stakeholders to build high-quality data products
- Supporting the deployment and …
london, south east england, united kingdom Hybrid / WFH Options
Fruition Group
experience in a leadership or technical lead role, with official line management responsibility. Strong experience with modern data stack technologies, including Python, Snowflake, AWS (S3, EC2, Terraform), Airflow, dbt, Apache Spark, Apache Iceberg, and Postgres. Skilled in balancing technical excellence with business priorities in a fast-paced environment. Strong communication and stakeholder management skills, able to translate …
us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

Your Role
We are looking for a skilled Spark/Scala Developer to join our data engineering team. The ideal candidate will have hands-on experience in designing, developing, and maintaining large-scale data processing pipelines using Apache Spark and Scala. You will work closely with data scientists, analysts, and engineers to build efficient data solutions and enable data-driven decision-making.

Key Responsibilities:
- Develop, optimize, and maintain data pipelines and ETL processes using Apache Spark and Scala.
- Design scalable and robust data processing solutions for batch and real-time data.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
- Perform data ingestion, transformation, and cleansing from various structured and unstructured sources.
- Monitor and troubleshoot Spark jobs, ensuring high performance and reliability.
- Write clean, maintainable, and well-documented code.
- Participate in code reviews, design discussions, and agile ceremonies.
- Implement data quality and governance best practices.
- Stay updated with …
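The ingest → transform → cleanse flow this role describes can be sketched in a few lines of stdlib Python (the role itself uses Spark and Scala; this is a language-agnostic illustration, and the field names and data-quality rules are invented).

```python
import csv
import io

# Hypothetical raw feed: column names and values are invented for illustration.
raw = """user_id,amount,currency
1, 10.50 ,gbp
2,,usd
1,4.25,GBP
"""

def extract(text):
    """Ingest: parse CSV records from a source (here, an in-memory string)."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Cleanse and normalise: drop rows missing an amount, trim whitespace,
    upper-case currency codes, and cast types."""
    out = []
    for row in rows:
        amount = row["amount"].strip()
        if not amount:
            continue  # data-quality rule: skip incomplete records
        out.append({
            "user_id": int(row["user_id"]),
            "amount": float(amount),
            "currency": row["currency"].strip().upper(),
        })
    return out

def load(rows):
    """Load: aggregate per user; a real Spark job would write to a sink instead."""
    totals = {}
    for row in rows:
        totals[row["user_id"]] = totals.get(row["user_id"], 0.0) + row["amount"]
    return totals

totals = load(transform(extract(raw)))
print(totals)  # {1: 14.75}
```

In Spark the same stages map onto DataFrame reads, column transformations, and writes, with the engine handling distribution across the cluster.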
london (city of london), south east england, united kingdom
Capgemini
/MS in Computer Science, Software Engineering, or equivalent technical discipline. 8+ years of hands-on experience building large-scale distributed data pipelines and architectures. Expert-level knowledge in Apache Spark, PySpark, and Databricks, including experience with Delta Lake, Unity Catalog, MLflow, and Databricks Workflows. Deep proficiency in Python and SQL, with proven experience building modular, testable, reusable …
london (city of london), south east england, united kingdom
Mercuria
create products that drive smarter decisions. You'll play a key role in shaping our Lakehouse architecture.

What you'll be doing:
- Building and optimising data ingestion pipelines using Apache Spark (ideally in Azure Databricks)
- Designing semantic models and dashboards for business insights
- Collaborating across teams to understand requirements and deliver fit-for-purpose data products
- Supporting the …
- Great attention to detail
- Experience in regulated industries like finance or insurance is a plus

Bonus points for:
- Real-time data pipeline experience
- Azure Data Engineer certifications
- Familiarity with Apache Kafka or unstructured data (e.g. voice)

Ready to shape the future of data at Reassured? If you're excited by the idea of building smart, scalable solutions that make …
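"Semantic models" over a Lakehouse ultimately resolve to star-schema joins: measures on a fact table grouped by dimension attributes. A minimal sketch with SQLite and invented fact/dimension tables:

```python
import sqlite3

# Minimal star-schema sketch: one fact table joined to one dimension table.
# Table and column names are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales (sale_id INTEGER PRIMARY KEY, product_id INTEGER, amount REAL);
INSERT INTO dim_product VALUES (1, 'Books'), (2, 'Games');
INSERT INTO fact_sales VALUES (10, 1, 20.0), (11, 1, 5.0), (12, 2, 30.0);
""")

# A typical semantic-layer measure: revenue by category.
rows = conn.execute("""
    SELECT d.category, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_product d USING (product_id)
    GROUP BY d.category
    ORDER BY d.category
""").fetchall()
print(rows)  # [('Books', 25.0), ('Games', 30.0)]
```

The same shape, expressed as Delta tables and a dashboard-facing view, is what the Databricks work here would produce at scale.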
London, South East, England, United Kingdom Hybrid / WFH Options
Involved Solutions
driven decision-making.

Responsibilities for the Senior Data Engineer:
- Design, build, and maintain scalable data pipelines and architectures, ensuring reliability, performance, and best-in-class engineering standards
- Leverage Databricks, Spark, and modern cloud platforms (Azure/AWS) to deliver clean, high-quality data for analytics and operational insights
- Lead by example on engineering excellence, mentoring junior engineers and driving … customer data
- Continuously improve existing systems, introducing new technologies and methodologies that enhance efficiency, scalability, and cost optimisation

Essential Skills for the Senior Data Engineer:
- Proficient with Databricks and Apache Spark, including performance tuning and advanced concepts such as Delta Lake and streaming
- Strong programming skills in Python with experience in software engineering principles, version control, unit testing …
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
skills, and the ability to think critically and analytically. Highly experienced in documentation and data dictionaries. Knowledge of big data technologies and distributed computing frameworks such as Hadoop and Spark. Excellent communication skills to effectively collaborate with cross-functional teams and present insights to business stakeholders. Please can you send me a copy of your CV if you're …
london, south east england, united kingdom Hybrid / WFH Options
Tenth Revolution Group
Skills:
- Strong experience with Azure data services (Data Factory, Synapse, Blob Storage, etc.)
- Proficiency in SQL for data manipulation, transformation, and performance optimisation
- Hands-on experience with Databricks (Spark, Delta Lake, notebooks)
- Solid understanding of data architecture principles and cloud-native design
- Experience working in consultancy or client-facing roles is highly desirable
- Familiarity with CI/CD …
Reigate, England, United Kingdom Hybrid / WFH Options
esure Group
exposure to cloud-native data infrastructures (Databricks, Snowflake), especially in AWS environments, is a plus. Experience in building and maintaining batch and streaming data pipelines using Kafka, Airflow, or Spark. Familiarity with governance frameworks, access controls (RBAC), and implementation of pseudonymisation and retention policies. Exposure to enabling GenAI and ML workloads by preparing model-ready and vector-optimised datasets …
of data modelling and data warehousing concepts. Familiarity with version control systems, particularly Git.

Desirable Skills:
- Experience with infrastructure as code tools such as Terraform or CloudFormation
- Exposure to Apache Spark for distributed data processing
- Familiarity with workflow orchestration tools such as Airflow or AWS Step Functions
- Understanding of containerisation using Docker
- Experience with CI/CD pipelines …
Science, Computer Science, or a related field. 5+ years of experience in data engineering and data quality. Strong proficiency in Python/Java, SQL, and data processing frameworks including Apache Spark. Knowledge of machine learning and its data requirements. Attention to detail and a strong commitment to data integrity. Excellent problem-solving skills and ability to work in a …
london (city of london), south east england, united kingdom
Humanoid