London (City of London), South East England, United Kingdom
Mercuria
BS/MS in Computer Science, Software Engineering, or equivalent technical discipline. 8+ years of hands-on experience building large-scale distributed data pipelines and architectures. Expert-level knowledge of Apache Spark, PySpark, and Databricks, including experience with Delta Lake, Unity Catalog, MLflow, and Databricks Workflows. Deep proficiency in Python and SQL, with proven experience building modular, testable, reusable pipeline …
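As a rough sketch of the Spark/Delta Lake skills this posting names, the snippet below writes and reads a Delta table with PySpark. The paths, column names, and data are invented for illustration, and it assumes a Databricks runtime or a local Spark session with the delta-spark package configured.

```python
from pyspark.sql import SparkSession

# Assumes a Databricks cluster, or a local session with delta-spark configured.
spark = SparkSession.builder.appName("delta-example").getOrCreate()

# Hypothetical commodity prices; columns are illustrative only.
df = spark.createDataFrame(
    [("BRENT", 82.4), ("WTI", 78.1)],
    ["instrument", "price"],
)

# Write as a Delta table, then read it back. Delta adds ACID transactions
# and time travel on top of plain Parquet files.
df.write.format("delta").mode("overwrite").save("/tmp/prices_delta")
prices = spark.read.format("delta").load("/tmp/prices_delta")
prices.show()
```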
London, South East England, United Kingdom Hybrid / WFH Options
Involved Solutions
… customer data. Continuously improve existing systems, introducing new technologies and methodologies that enhance efficiency, scalability, and cost optimisation. Essential skills for the Senior Data Engineer: proficiency with Databricks and Apache Spark, including performance tuning and advanced concepts such as Delta Lake and streaming; strong programming skills in Python, with experience in software engineering principles, version control, unit testing and …
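The performance-tuning requirement above usually covers join strategy and partitioning. A minimal sketch, with invented table paths, of broadcasting a small lookup table to avoid a shuffle join:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-example").getOrCreate()

# Illustrative datasets: a large fact table and a small lookup table.
events = spark.read.format("delta").load("/tmp/events")        # large
countries = spark.read.format("delta").load("/tmp/countries")  # small

# Broadcasting the small side ships it to every executor, replacing a
# shuffle join with a map-side hash join.
joined = events.join(broadcast(countries), on="country_code")

# Repartitioning by the write key keeps output files evenly sized.
joined.repartition("country_code").write.mode("overwrite").parquet("/tmp/enriched")
```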
… data quality, or other areas directly relevant to data engineering responsibilities and tasks. Proven project experience developing and maintaining data warehouses in big data solutions (Snowflake). Expert knowledge of Apache technologies such as Kafka, Airflow, and Spark for building scalable and efficient data pipelines. Ability to design, build, and deploy data solutions that capture, explore, transform, and utilize data …
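For the Snowflake requirement, a minimal loading sketch using the official snowflake-connector-python package; the account, credentials, stage, and table names are all placeholders.

```python
import snowflake.connector

# Placeholder credentials; in practice these come from a secrets manager.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="...",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)

cur = conn.cursor()
try:
    # Load files already uploaded to a named stage into a target table.
    cur.execute("COPY INTO raw.orders FROM @orders_stage FILE_FORMAT = (TYPE = CSV)")
finally:
    cur.close()
    conn.close()
```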
City of London, England, United Kingdom Hybrid / WFH Options
Bondaval
… Science (or similar) from a good university is highly desirable. Nice to have: familiarity with message brokers (Kafka, SQS/SNS, RabbitMQ); knowledge of real-time streaming (Kafka Streams, Apache Flink, etc.); exposure to big-data or machine-learning frameworks (TensorFlow, PyTorch, Hugging Face, LangChain); understanding of infrastructure and DevOps (Terraform, Ansible, AWS, Kubernetes); exposure to …
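To ground the message-broker bullet, a minimal consumer sketch using the kafka-python client; the topic, broker address, and group id are invented for the example.

```python
import json
from kafka import KafkaConsumer

# Hypothetical topic and brokers for illustration.
consumer = KafkaConsumer(
    "payments",
    bootstrap_servers=["localhost:9092"],
    group_id="risk-service",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

# Each record is one event; consumer groups let multiple instances
# share partitions for horizontal scaling.
for message in consumer:
    print(message.topic, message.offset, message.value)
```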
… frameworks, and clear documentation within your pipelines. Experience in the following areas is not essential but would be beneficial. Data orchestration tools: familiarity with modern workflow management tools such as Apache Airflow, Prefect, or Dagster. Modern data transformation: experience with dbt (Data Build Tool) for managing the transformation layer of the data warehouse. BI tool familiarity: an understanding of how …
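To make the orchestration bullet concrete, a minimal Airflow DAG sketch with invented task logic. Note the `schedule` argument is named `schedule_interval` on Airflow versions before 2.4.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling source data")  # placeholder for real extraction logic

def transform():
    print("cleaning and modelling")  # placeholder, e.g. a dbt run

with DAG(
    dag_id="daily_warehouse_load",   # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # `schedule_interval` before Airflow 2.4
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```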
… of data modelling and data warehousing concepts. Familiarity with version control systems, particularly Git. Desirable skills: experience with infrastructure-as-code tools such as Terraform or CloudFormation; exposure to Apache Spark for distributed data processing; familiarity with workflow orchestration tools such as Airflow or AWS Step Functions; understanding of containerisation using Docker; experience with CI/CD pipelines and …
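For the AWS Step Functions mention, a minimal sketch of starting a state machine execution from Python with boto3; the ARN and input payload are placeholders.

```python
import json

import boto3

# Placeholder ARN; a real one would come from Terraform/CloudFormation outputs.
STATE_MACHINE_ARN = "arn:aws:states:eu-west-2:123456789012:stateMachine:nightly-etl"

client = boto3.client("stepfunctions")

# Each call runs the state machine once with the given JSON input.
response = client.start_execution(
    stateMachineArn=STATE_MACHINE_ARN,
    input=json.dumps({"run_date": "2024-01-01"}),
)
print(response["executionArn"])
```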
London (City of London), South East England, United Kingdom
Vallum Associates
… and contribute to technical roadmap planning. Technical skills:
- Strong SQL skills, with experience in complex query optimization
- Strong Python programming skills, with experience in data processing libraries (pandas, NumPy, Apache Spark)
- Hands-on experience building and maintaining data ingestion pipelines
- Proven track record of optimising queries, code, and system performance
- Experience with open-source data processing frameworks (Apache Spark, Apache Kafka, Apache Airflow)
- Knowledge of distributed computing concepts and big data technologies
- Experience with version control systems (Git) and CI/CD practices
- Experience with relational databases (PostgreSQL, MySQL, or similar)
- Experience with containerization technologies (Docker, Kubernetes)
- Experience with data orchestration tools (Apache Airflow or Dagster)
- Understanding of data warehousing concepts and dimensional modelling …
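A minimal ingestion sketch touching several of the bullets above (pandas, SQL, PostgreSQL); the connection string, file, and table names are invented, and real credentials would come from configuration, not source code.

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder DSN; real credentials belong in environment variables or a vault.
engine = create_engine("postgresql+psycopg2://etl:secret@localhost:5432/analytics")

# Illustrative source file and target table.
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Basic cleanup before load: drop exact duplicates, normalise column names.
orders = orders.drop_duplicates()
orders.columns = [c.lower().strip() for c in orders.columns]

# Append into a staging table; downstream SQL handles dimensional modelling.
orders.to_sql("stg_orders", engine, if_exists="append", index=False)
```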
… of automation. IT WOULD BE NICE FOR THE SENIOR SOFTWARE ENGINEER TO HAVE: cloud-based experience; microservice or serverless architecture; big data/messaging technologies such as Apache NiFi/MiNiFi/Kafka. TO BE CONSIDERED: please apply online or email me directly. For further information, please call me on 07704 152 640. …
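To ground the messaging line, a minimal producer sketch with the kafka-python client (NiFi/MiNiFi would typically sit upstream, routing data into topics like this); the brokers, topic, and payload are illustrative.

```python
import json

from kafka import KafkaProducer

# Hypothetical brokers and topic for illustration.
producer = KafkaProducer(
    bootstrap_servers=["localhost:9092"],
    value_serializer=lambda obj: json.dumps(obj).encode("utf-8"),
)

# send() is asynchronous; .get() blocks until the broker acknowledges.
future = producer.send("sensor-readings", {"device": "A1", "temp_c": 21.5})
future.get(timeout=10)
producer.flush()
```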
London, South East England, United Kingdom Hybrid / WFH Options
BondAval
… hands-on when needed. Nice-to-haves: experience leading technical discovery or architecture definition in a scaling SaaS or fintech environment; familiarity with event-driven or streaming architectures (Kafka, Apache Flink, etc.); practical exposure to AI/LLM orchestration frameworks or fine-tuning workflows; experience designing developer tools, data platforms, or intelligent systems; interest in or experience mentoring …
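For the streaming-architecture bullet, a minimal PyFlink sketch that counts events per key; the bounded in-memory source stands in for a real Kafka source, and all names are invented.

```python
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

# A bounded in-memory source stands in for a real Kafka source here.
events = env.from_collection([
    ("payments", 1), ("logins", 1), ("payments", 1),
])

# Count events per type: key the stream, then reduce within each key.
counts = (
    events
    .key_by(lambda event: event[0])
    .reduce(lambda a, b: (a[0], a[1] + b[1]))
)

counts.print()
env.execute("event_counts")  # illustrative job name
```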
East London, London, United Kingdom Hybrid / WFH Options
InfinityQuest Ltd
… Hybrid) (3 days onsite, 2 days remote). Role type: 6-month contract with possibility of extensions. Mandatory skills: Python, PostgreSQL, Azure Databricks, AWS (S3), Git, Azure DevOps CI/CD, Apache Airflow, energy trading experience. Job description: Data Engineer (Python enterprise developer): 6+ years of experience in Python scripting; proficient in developing applications in Python; exposed to Python-oriented …
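For the AWS (S3) plus Python combination, a minimal sketch of reading a CSV object from S3 into pandas with boto3; the bucket and key are placeholders.

```python
import boto3
import pandas as pd

# Placeholder bucket/key; real values would come from configuration.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="trading-data", Key="trades/2024-01-01.csv")

# The response body is a streaming file-like object pandas can read directly.
trades = pd.read_csv(obj["Body"])
print(trades.head())
```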
London (City of London), South East England, United Kingdom
Humanoid
… Science, Computer Science, or a related field. 5+ years of experience in data engineering and data quality. Strong proficiency in Python/Java, SQL, and data processing frameworks, including Apache Spark. Knowledge of machine learning and its data requirements. Attention to detail and a strong commitment to data integrity. Excellent problem-solving skills and the ability to work in a …
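A minimal sketch of the kind of data-quality check this posting implies, written in PySpark with an invented input path and business key:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("/tmp/customers")  # illustrative input

# Null rate per column: a cheap first-pass integrity check.
null_counts = df.select([
    F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns
])
null_counts.show()

# Duplicate check on a hypothetical business key.
dupes = df.groupBy("customer_id").count().filter(F.col("count") > 1)
assert dupes.count() == 0, "duplicate customer_id values found"
```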
… record in full-stack data development (from ingestion to visualization). Strong expertise in Snowflake, including data modeling, warehousing, and performance optimization. Hands-on experience with ETL tools (e.g., Apache Airflow, dbt, Fivetran) and integrating data from ERP systems such as NetSuite. Proficiency in SQL, Python, and/or other scripting languages for data processing and automation. Familiarity with LLM …
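On the dbt side, dbt-core 1.5+ exposes a programmatic entry point; a minimal, hedged sketch assuming an existing dbt project whose staging models match a `staging` selector:

```python
from dbt.cli.main import dbtRunner

# Programmatic invocation, available in dbt-core 1.5+; equivalent to
# running `dbt run --select staging` from the project directory.
result = dbtRunner().invoke(["run", "--select", "staging"])

if not result.success:
    raise RuntimeError("dbt run failed", result.exception)
```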