e.g. AWS, Azure. Good knowledge of Linux, its development environments and tools. Experience in object-oriented methodologies and design patterns. Understanding of Big Data technologies such as Hadoop and Spark. Understanding of security implications and secure coding. Proven grasp of software development lifecycle best practices, agile methods, and conventions, including Source Code Management and Continuous Integration. Practical experience with agile …
Databricks platform. Optimise data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure … Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Ability to program in languages such as SQL, Python, R, YAML and JavaScript. Data Integration: Integrate data from various sources, including relational databases, APIs, and … best practices. Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage …
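By way of illustration, the transformation and data quality work described in this listing might look like the following minimal PySpark sketch. The paths, column names, and validation rule are hypothetical placeholders, not details taken from the role.

    # Minimal sketch: clean/enrich a table and apply an inline data quality
    # rule. Paths, columns, and the rule itself are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    orders = spark.read.format("delta").load("/mnt/bronze/orders")  # hypothetical path

    # Clean and enrich: trim keys, parse timestamps, derive a date column.
    cleaned = (
        orders
        .withColumn("customer_id", F.trim("customer_id"))
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .withColumn("order_date", F.to_date("order_ts"))
    )

    # Quality rule: quarantine failing rows rather than silently dropping them.
    is_valid = F.col("customer_id").isNotNull() & (F.col("amount") > 0)
    cleaned.filter(~is_valid).write.format("delta").mode("append").save("/mnt/quarantine/orders")
    cleaned.filter(is_valid).write.format("delta").mode("overwrite").save("/mnt/silver/orders")

Quarantining failed rows keeps validation auditable, which matters once pipelines feed downstream consumers.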
City of London, London, United Kingdom Hybrid / WFH Options
Peaple Talent
delivered solutions in Google Cloud Platform (GCP). Strong experience designing and delivering data solutions using BigQuery. Proficient in SQL and Python. Experience working with Big Data technologies such as Apache Spark or PySpark. Excellent communication skills, with the ability to engage effectively with senior stakeholders. Nice to haves: GCP Data Engineering certifications; BigQuery or other GCP tool certifications. …
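As a sketch of the BigQuery-plus-Python pairing this role asks for, a query via the official client library might look like this; the project, dataset, and table names are invented for illustration.

    # Minimal BigQuery sketch using google-cloud-bigquery.
    # The project/dataset/table names below are hypothetical.
    from google.cloud import bigquery

    client = bigquery.Client()  # uses application-default credentials

    query = """
        SELECT order_date, SUM(amount) AS revenue
        FROM `example-project.sales.orders`
        GROUP BY order_date
        ORDER BY order_date
    """
    for row in client.query(query).result():
        print(row.order_date, row.revenue)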
monitoring processes to maintain data integrity and reliability. * Optimise data workflows for performance, cost-efficiency, and maintainability using tools such as Azure Data Factory, AWS Data Pipeline, Databricks, or Apache Spark. * Integrate and prepare data for Tableau dashboards and reports, ensuring optimal performance and alignment with business needs. * Collaborate with visualisation teams to develop, maintain, and enhance …
and RESTful APIs. Proficiency with Kafka and distributed streaming systems. Solid understanding of SQL and data modeling. Experience with containerization (Docker) and orchestration (Kubernetes). Working knowledge of Flink, Spark, or Databricks for data processing. Familiarity with AWS services (ECS, EKS, S3, Lambda, etc.). Basic scripting in Python for automation or data manipulation. …
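A minimal sketch of the Kafka consumer work implied above, assuming the kafka-python client and a hypothetical "events" topic:

    # Minimal Kafka consumer sketch (kafka-python). Topic name and broker
    # address are hypothetical placeholders.
    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "events",                          # hypothetical topic
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="earliest",
        enable_auto_commit=False,          # commit manually after processing
    )

    for message in consumer:
        record = message.value
        # ... transform or route the record here ...
        consumer.commit()                  # at-least-once semantics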
technical stakeholders. What You Bring: 3–5+ years of experience in software engineering (Python). Experience with FastAPI, cloud platforms (AWS, Azure, or GCP), and Docker. Bonus: experience with ML workflows, Spark, Airflow, or trading systems. Why Join: Up to £140K. Up to 100% bonus. Relocation package. Flat, entrepreneurial team structure. …
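Given the FastAPI mention, a service in this stack might start from something like the sketch below; the endpoints and the placeholder lookup are hypothetical, not taken from the listing.

    # Minimal FastAPI sketch; run with: uvicorn app:app
    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/health")
    def health() -> dict:
        return {"status": "ok"}

    @app.get("/positions/{symbol}")
    def get_position(symbol: str) -> dict:
        # Placeholder lookup; a real service would query a data store.
        return {"symbol": symbol.upper(), "quantity": 0}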
modelling, machine-learning, clustering and classification techniques, and algorithms. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau …
London, South East, England, United Kingdom Hybrid / WFH Options
Executive Facilities
domains. Proficiency in SQL for data extraction, transformation, and pipeline development. Experience with dashboarding and visualization tools (Tableau, Qlik, or similar). Familiarity with big data tools (Snowflake, Databricks, Spark) and ETL processes. Useful experience: Python or R for advanced analytics, automation, or experimentation support. Knowledge of statistical methods and experimentation (A/B testing) preferred. Machine learning and …
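The A/B testing knowledge mentioned here usually reduces to comparisons like the following two-proportion z-test; the conversion counts are invented for illustration.

    # Two-proportion z-test sketch for an A/B experiment.
    # All counts below are hypothetical.
    from math import sqrt
    from scipy.stats import norm

    conv_a, n_a = 210, 4_000   # control: conversions, visitors
    conv_b, n_b = 260, 4_100   # variant: conversions, visitors

    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))                # two-sided

    print(f"lift={p_b - p_a:.4f}  z={z:.2f}  p={p_value:.4f}")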
with AWS data platforms and their respective data services. Solid understanding of data governance principles, including data quality, metadata management, and access control. Familiarity with big data technologies (e.g., Spark, Hadoop) and distributed computing. Proficiency in SQL and at least one programming language (e.g., Python, Java). 6-month contract, Inside IR35. Immediately available. London, up to 2 times a …
pace with evolving technologies and techniques. Candidate Profile: 1+ years’ experience in a data-related role (Analyst, Engineer, Scientist, Consultant, or Specialist). Experience with technologies such as Python, SQL, Spark, Power BI, AWS, Azure, or GCP. Strong analytical and problem-solving skills. Comfortable working directly with clients and stakeholders. Excellent communication and teamwork abilities. Must hold active SC or …
City of London, London, United Kingdom Hybrid / WFH Options
Areti Group | B Corp™
or Digital Experience). Strong SQL skills for data extraction, transformation, and pipeline development. Proficiency in Tableau, Qlik, or similar visualization tools. Experience with big data tools (Snowflake, Databricks, Spark) and ETL processes. Exposure to Python or R for automation, experimentation, or analytics. Excellent communication and storytelling skills with both technical and non-technical audiences. Proactive, growth-oriented mindset …
City of London, London, United Kingdom Hybrid / WFH Options
Empresaria Group plc
role involves structuring analytical solutions that address business objectives and problem solving. We are looking for hands-on experience in writing code for AWS Glue in Python, PySpark, and Spark SQL. The successful candidate will translate stated or implied client needs into researchable hypotheses, facilitate client working sessions, and be involved in recurring project status meetings. You will develop …
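A minimal sketch of the kind of AWS Glue job described, written in PySpark; the catalog database, table, and S3 path are hypothetical placeholders.

    # Minimal AWS Glue job sketch (runs inside a Glue job environment).
    # Database, table, and bucket names are hypothetical.
    import sys
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read from the Glue Data Catalog, aggregate with Spark SQL.
    dyf = glue_context.create_dynamic_frame.from_catalog(
        database="sales_db", table_name="orders"        # hypothetical names
    )
    dyf.toDF().createOrReplaceTempView("orders")
    daily = glue_context.spark_session.sql(
        "SELECT order_date, SUM(amount) AS revenue FROM orders GROUP BY order_date"
    )

    # Write the aggregate back out as Parquet.
    daily.write.mode("overwrite").parquet("s3://example-bucket/daily_revenue/")
    job.commit()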
deep learning architectures (e.g., attention models, transformers, retrieval models). Hands-on experience with LLMs and GenAI technologies. Strong programming and problem-solving skills with proficiency in Python, SQL, Spark, and Hive. Deep understanding of classical and modern ML techniques, A/B testing methodologies, and experiment design. Solid background in ranking, recommendation, and retrieval systems. Familiarity with large …
tooling (RAG, fine-tuning, LLMs, agentic frameworks). Ability to bridge technical and commercial conversations confidently. A logical, creative problem-solver who can turn data into ROI. Nice-to-haves: Spark/PySpark/Databricks or distributed data experience. Familiarity with AWS (S3, EMR) or Hive-based environments. Consulting or enterprise B2B experience. Exposure to causal AI, agentic systems, or …
cloud data platforms, Lakehouse architecture, and data engineering frameworks. Required Qualifications: 6+ years of experience in data engineering. 3+ years of hands-on experience with Databricks, Delta Lake, and Spark (PySpark preferred). Proven track record implementing Medallion Architecture (Bronze, Silver, Gold layers) in production environments. Strong knowledge of data modeling, ETL/ELT design, and data lakehouse concepts. … Proficiency in Python, SQL, and Spark optimization techniques. Experience working with cloud data platforms such as Azure Data Lake, AWS S3, or GCP BigQuery. Strong understanding of data quality frameworks, testing, and CI/CD pipelines for data workflows. Excellent communication skills and ability to collaborate across teams. Preferred Qualifications: Experience with Databricks Unity Catalog and Delta Live Tables. …
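The Medallion Architecture named in this listing moves data through Bronze (raw), Silver (validated), and Gold (business-level) layers; a compressed PySpark/Delta sketch, with hypothetical paths and rules, is below.

    # Compressed Medallion Architecture sketch (PySpark + Delta Lake).
    # Paths, schema, and the validation rule are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Bronze: land the raw source data as-is.
    raw = spark.read.json("s3://example-lake/landing/orders/")
    raw.write.format("delta").mode("append").save("s3://example-lake/bronze/orders")

    # Silver: deduplicate and enforce basic quality rules.
    silver = (
        spark.read.format("delta").load("s3://example-lake/bronze/orders")
        .dropDuplicates(["order_id"])
        .filter(F.col("amount") > 0)                    # simple validation rule
        .withColumn("order_date", F.to_date("order_ts"))
    )
    silver.write.format("delta").mode("overwrite").save("s3://example-lake/silver/orders")

    # Gold: business-level aggregate for reporting.
    gold = silver.groupBy("order_date").agg(F.sum("amount").alias("revenue"))
    gold.write.format("delta").mode("overwrite").save("s3://example-lake/gold/daily_revenue")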
City of London, London, United Kingdom Hybrid / WFH Options
Hexegic
to create, test and validate data models and outputs. Set up monitoring and ensure data health for outputs. What we are looking for: Proficiency in Python, with experience in Apache Spark and PySpark. Previous experience with data analytics software. Ability to scope new integrations and translate user requirements into technical specifications. What’s in it for you? Base …
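The monitoring and data-health responsibilities described here often start as simple assertion-style checks; a minimal PySpark sketch, with a hypothetical table and thresholds, follows.

    # Minimal data-health check sketch (PySpark). Table name and
    # thresholds are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.read.table("silver.orders")      # hypothetical validated table

    total = df.count()
    null_ids = df.filter(F.col("order_id").isNull()).count()
    dupes = total - df.dropDuplicates(["order_id"]).count()

    # Fail loudly so a scheduler or orchestrator can alert on it.
    assert total > 0, "output table is empty"
    assert null_ids == 0, f"{null_ids} rows missing order_id"
    assert dupes / max(total, 1) < 0.01, f"{dupes} duplicate order_ids"
    print(f"health check passed: rows={total}")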