City of London, London, United Kingdom Hybrid / WFH Options
Hexegic
and validate data models and outputs Set up monitoring and ensure data health for outputs What we are looking for Proficiency in Python, with experience in Apache Spark and PySpark Previous experience with data analytics software Ability to scope new integrations and translate user requirements into technical specifications What’s in it for you? Base salary of More ❯
City of London, London, England, United Kingdom Hybrid / WFH Options
Ada Meher
London offices 1-2 days a week – based on business need. To Be Considered: Demonstrable expertise and experience working on large-scale Data Engineering projects Strong experience in Python/PySpark, Databricks & Apache Spark Hands-on experience with both batch & streaming pipelines Strong experience in AWS and associated tooling (e.g., S3, Glue, Redshift, Lambda, Terraform) Experience designing Data Engineering More ❯
City of London, London, United Kingdom Hybrid / WFH Options
Fynity
stack — you’ll be empowered to design solutions using the most appropriate technologies, deploying final implementations on AWS. You’ll bring hands-on depth and knowledge in: Languages: Python, PySpark, SQL Technologies: Spark, Airflow Cloud: AWS (API Gateway, Lambda, Redshift, Glue, CloudWatch, etc.) Data Pipelines: Designing and building modern, cloud-native pipelines using AWS services In addition, you will More ❯
City of London, London, United Kingdom Hybrid / WFH Options
Peaple Talent
Google Cloud Platform (GCP) Strong experience designing and delivering data solutions using BigQuery Proficient in SQL and Python Experience working with Big Data technologies such as Apache Spark or PySpark Excellent communication skills, with the ability to engage effectively with senior stakeholders Nice to haves: GCP Data Engineering certifications BigQuery or other GCP tool certifications What’s in it More ❯
City of London, London, United Kingdom Hybrid / WFH Options
Ascentia Partners
proactive problem-solver with strong analytical and communication skills, capable of working both independently and collaboratively. Desirable: Experience working with Azure (or other major cloud platforms). Familiarity with PySpark or other big data technologies. Understanding of version control systems (e.g., Git). Knowledge of pricing or modelling workflows and the impact of engineering decisions on model performance. Why More ❯
City of London, London, United Kingdom Hybrid / WFH Options
Fuse Group
Azure DevOps. Required Skills & Experience Strong expertise in Power BI, DAX, SQL, and data architecture. Proven experience with Azure Data Factory, Fabric Lakehouse/Warehouse, and Synapse. Proficiency in PySpark, T-SQL, and data transformation using Notebooks. Familiarity with DevOps, ARM/Bicep templates, and semantic modelling. Experience delivering enterprise-scale BI/data warehouse programmes. Exposure to Microsoft More ❯
City of London, London, United Kingdom Hybrid / WFH Options
Intelix.AI
Text-to-Cypher/SPARQL with safety filters and small eval sets. MCP-style tool contracts for safe agent access. Streaming/ELT at scale (Kafka/Databricks/PySpark). More ❯
City of London, London, United Kingdom Hybrid / WFH Options
Sanderson Recruitment
Team Leading experience - REQUIRED/Demonstrable on CV (Full Support from Engineering Manager is also available) Hands-on development/engineering background Machine Learning or Data Background Technical Experience: PySpark, Python, SQL, Jupyter Cloud: AWS, Azure (Cloud Environment) - Moving towards Azure Nice to Have: Astro/Airflow, Notebook Reasonable Adjustments: Respect and equality are core values to us. We More ❯
analytics efforts. Required Skills & Experience: 4–5 years of commercial experience in data science, preferably in eCommerce or marketing analytics. Strong hands-on experience with Databricks, SQL, Python, and PySpark; knowledge of R and dashboarding tools is a plus. Proven experience with causal inference, MMM modelling, and experimentation. Strong analytical and problem-solving skills with the ability to More ❯
City of London, London, United Kingdom Hybrid / WFH Options
TechShack
basis for an initial 6-month assignment. The role involves very occasional travel to London, with flexibility to work remotely. Skills required: Azure platform experience Databricks, Azure Functions, and PySpark Strong data engineering and development background Active SC Clearance This role has a quick 1-hour interview process with a start date of 14th November. Azure Databricks Developer More ❯
london (city of london), south east england, united kingdom
Luxoft
You'll architect and maintain robust, scalable data pipelines and infrastructure that power our analytics, machine learning, and business intelligence initiatives. You'll work with cutting-edge technologies like Python, PySpark, AWS EMR, and Snowflake, and collaborate across teams to ensure data is clean, reliable, and actionable. Responsibilities: - Build and maintain scalable ETL pipelines using Python and PySpark to … reviews, and foster a culture of continuous learning and knowledge-sharing Mandatory Skills Description: - 5+ years of experience in data engineering or software development - Strong proficiency in Python and PySpark - Hands-on experience with AWS services, especially EMR, S3, Lambda, and Glue - Deep understanding of Snowflake architecture and performance tuning - Solid grasp of data modeling, warehousing concepts, and SQL More ❯
london (city of london), south east england, united kingdom
Mastek
platform. Optimise data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks … practices. Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage, and More ❯
london (city of london), south east england, united kingdom
Calibre Candidates
Factory for data ingestion and transformation Work with Azure Data Lake, Synapse, and SQL DW to manage large volumes of data Develop data transformation logic using SQL, Python, and PySpark code Collaborate with cross-functional teams to translate business requirements into data solutions Create mapping documents, transformation rules, and ensure quality delivery Contribute to DevOps processes, CI/CD … development and big data solutions Recent experience within Insurance Technology essential Solid expertise with Azure Databricks, Data Factory, ADLS, Synapse, and Azure SQL Strong skills in SQL, Python, and PySpark Solid understanding of DevOps, CI/CD, and Agile methodologies Excellent communication and stakeholder management skills More ❯
Role – Sr. Technology Architect Technology – Data Modelling, ERWIN Modelling, Azure Architect, PySpark Location – UK (London) Job Description The Data Architect should have a strong understanding of building data models, with experience in the data architecting and engineering space across Spark, PySpark, ADF, Azure Synapse, and Data Lake. Your role In the role of a Sr Technology Architect, you will primarily be responsible … in technology consulting, enterprise and solutions architecture and architectural frameworks, and Data Modelling, with experience in ERWIN Modelling. 7+ years of hands-on experience in ADF, Azure Databricks, Azure Synapse, Spark, PySpark, Python/Scala, and SQL. Hands-on experience designing and building a Data Lake from multiple source systems/data providers. Experience in data modelling, architecture, implementation & testing. Experienced in More ❯