handling to support monitoring and incident response. • Align implementations with InfoSum's privacy, security, and compliance practices. Required Skills and Experience: • Proven experience with Apache Spark (Scala, Java, or PySpark), including performance optimization and advanced tuning techniques. • Strong troubleshooting skills in production Spark environments, including diagnosing memory usage, shuffles, skew, and executor behavior. • Experience deploying and managing Spark jobs More ❯
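The skew diagnosis this listing asks about can be illustrated with a small sketch. This is a hypothetical helper, not part of any listing: given per-partition row counts (in Spark these could come from something like `df.rdd.glom().map(len).collect()`), it flags partitions far above the mean, which is the usual first signal that salting or repartitioning is needed.

```python
# Hypothetical sketch: diagnosing partition skew of the kind described above.
# Input is a list of per-partition row counts; names and threshold are illustrative.

def skew_report(partition_counts, threshold=2.0):
    """Return (max/mean ratio, indices of partitions above threshold * mean)."""
    mean = sum(partition_counts) / len(partition_counts)
    ratio = max(partition_counts) / mean
    skewed = [i for i, n in enumerate(partition_counts) if n > threshold * mean]
    return ratio, skewed

counts = [100, 110, 95, 4000, 105]   # one hot partition
ratio, skewed = skew_report(counts)
print(round(ratio, 2), skewed)       # → 4.54 [3]
```

A ratio near 1.0 means balanced partitions; a large ratio with a short list of hot indices points at a skewed join or group key.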
Drive automation and CI/CD practices across the data platform Explore new technologies to improve data ingestion and self-service Essential Skills Azure Databricks : Expert in Spark (SQL, PySpark), Databricks Workflows Data Pipeline Design : Proven experience in scalable ETL/ELT development Azure Services : Data Lake, Blob Storage, Synapse Data Governance : Unity Catalog, access control, metadata management Performance More ❯
we would like to discuss with you. Please note this role requires onsite attendance once a week and has been deemed inside IR35. Requirements: Experience in Azure Synapse, ETL, PySpark, SQL, data modelling and Databricks. Design and develop Azure Pipelines including data transformation and data cleansing Document source-to-target mappings Re-engineer manual data flows to enable More ❯
data architecture, data modelling, and big data platforms. Proven expertise in Lakehouse Architecture, particularly with Databricks. Hands-on experience with tools such as Azure Data Factory, Unity Catalog, Synapse, PySpark, Power BI, SQL Server, Cosmos DB, and Python. In-depth knowledge of data governance frameworks and best practices. Solid understanding of cloud-native architectures and microservices in data environments. More ❯
join them ASAP. They are looking for someone with a strong background in data engineering in a production environment, including testing and running production-grade pipelines. Essential: Kubernetes, Python, PySpark, Docker, CI/CD, Git, Testing. Preferably Argo Workflows. Data Engineer | SC Cleared | Remote | Python More ❯
Central London, London, United Kingdom Hybrid / WFH Options
iDPP
someone who enjoys building scalable data solutions while staying close to business impact. The Role As a Data Analytics Engineer , you'll design, build, and maintain reliable data pipelines, primarily using PySpark, SQL, and Python, to ensure business teams (analysts, product managers, finance, operations) have access to well-modeled, actionable data. You'll work closely with stakeholders to translate business needs into … spend more time coding, managing data infrastructure, and ensuring pipeline reliability. Who We're Looking For Data Analytics : Analysts who have strong experience building and maintaining data pipelines (particularly in PySpark/SQL) and want to work on production-grade infrastructure. Data Engineering : Engineers who want to work more closely with business stakeholders and enable analytics-ready data solutions. Analytics … Professionals who already operate in this hybrid space, with proven expertise across big data environments, data modeling, and business-facing delivery. Key Skills & Experience Strong hands-on experience with PySpark, SQL, and Python Proven track record of building and maintaining data pipelines Ability to translate business requirements into robust data models and solutions Experience with data validation, quality checks More ❯
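The "data validation, quality checks" experience this listing ends on can be sketched in a few lines. This is an illustrative, stdlib-only example (all names and the tolerance value are assumptions, not from the listing): compute per-column null rates over a batch of records and compare them to a tolerance before loading downstream.

```python
# Hypothetical data-quality check: per-column null rates over a list of dict rows.
# In a PySpark pipeline the same idea maps to counting nulls per column of a DataFrame.

def null_rates(rows, columns):
    """Fraction of missing (None) values per column."""
    total = len(rows)
    return {c: sum(1 for r in rows if r.get(c) is None) / total for c in columns}

def passes_quality(rows, columns, tolerance=0.1):
    """True if every column's null rate is within tolerance."""
    return all(rate <= tolerance for rate in null_rates(rows, columns).values())

rows = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": 2, "amount": None},
    {"order_id": 3, "amount": 25.5},
    {"order_id": None, "amount": 5.0},
]
print(null_rates(rows, ["order_id", "amount"]))  # → {'order_id': 0.25, 'amount': 0.25}
print(passes_quality(rows, ["order_id", "amount"]))  # → False (0.25 > 0.1)
```

A check like this typically gates the load step: the batch is quarantined rather than written when it fails.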
with a focus on performance, scalability, and reliability. Responsibilities Design and implement robust data migration pipelines using Azure Data Factory, Synapse Analytics, and Databricks Develop scalable ETL processes using PySpark and Python Collaborate with stakeholders to understand legacy data structures and ensure accurate mapping and transformation Ensure data quality, governance, and performance throughout the migration lifecycle Document technical processes … and support knowledge transfer to internal teams Required Skills Strong hands-on experience with Azure Data Factory, Synapse, Databricks, PySpark, Python, and SQL Proven track record in delivering data migration projects within Azure environments Ability to work independently and communicate effectively with technical and non-technical stakeholders Previous experience in consultancy or client-facing roles is advantageous More ❯
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
London, South East, England, United Kingdom Hybrid / WFH Options
Harnham - Data & Analytics Recruitment
Do Build & optimise recommendation/personalisation models. Drive incremental targeting beyond repeat-purchase patterns. Apply predictive analytics to customer behaviour & purchase history. Use Python (essential), SQL, and ideally PySpark to deliver insights. Collaborate with Product, Content, and Data Science to align models with business goals. Translate data into clear, actionable insights. (Bonus) Explore AI-driven ad content opportunities. … What We're Looking For Proven experience with predictive modelling/recommender systems. Strong Python & SQL skills (essential). Exposure to PySpark (desirable). Strong communicator with ability to link data to business outcomes. (Bonus) Experience with Generative AI or content automation. More ❯
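The recommender work described above can be boiled down to a tiny sketch. This is a hypothetical, stdlib-only illustration of one common approach (user-based similarity on purchase vectors); the users, items, and data are invented, and a production system would use PySpark or a dedicated library at scale.

```python
import math

# Hypothetical sketch: recommend items via cosine similarity of purchase vectors.

def cosine(a, b):
    """Cosine similarity of two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Rows = users, columns = items (1 = purchased). All values illustrative.
purchases = {
    "alice": [1, 1, 0, 0],
    "bob":   [1, 0, 1, 0],
    "carol": [0, 0, 1, 1],
}

def recommend(user, k=1):
    """Items the most similar other user bought that `user` has not."""
    target = purchases[user]
    best = max((u for u in purchases if u != user),
               key=lambda u: cosine(target, purchases[u]))
    return [i for i, (mine, theirs) in enumerate(zip(target, purchases[best]))
            if theirs and not mine][:k]

print(recommend("alice"))  # → [2]  (bob is most similar; he bought item 2)
```

"Incremental targeting beyond repeat-purchase patterns", as the listing puts it, is exactly the filter in the last line: only items the user has *not* already bought are surfaced.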