in Computer Science, Software Engineering, or equivalent technical discipline. 8+ years of hands-on experience building large-scale distributed data pipelines and architectures. Expert-level knowledge in Apache Spark, PySpark, and Databricks, including experience with Delta Lake, Unity Catalog, MLflow, and Databricks Workflows. Deep proficiency in Python and SQL, with proven experience building modular, testable, reusable pipeline components. Strong experience …
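For illustration, a minimal sketch of the kind of modular, testable pipeline component this role describes: a pure DataFrame-in, DataFrame-out function that can be unit-tested in isolation, then wired to a Delta Lake write. All paths and column names are hypothetical.

from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

def deduplicate_latest(df: DataFrame, key: str, ts_col: str) -> DataFrame:
    """Keep the most recent record per key; pure function, easy to unit-test."""
    w = Window.partitionBy(key).orderBy(F.col(ts_col).desc())
    return (df.withColumn("_rn", F.row_number().over(w))
              .filter(F.col("_rn") == 1)
              .drop("_rn"))

if __name__ == "__main__":
    spark = SparkSession.builder.appName("example").getOrCreate()
    raw = spark.read.json("/mnt/raw/orders")  # hypothetical source path
    clean = deduplicate_latest(raw, key="order_id", ts_col="updated_at")
    # Delta write requires the delta-spark package to be configured.
    clean.write.format("delta").mode("overwrite").save("/mnt/silver/orders")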
to logically analyse complex requirements, processes, and systems to deliver solutions. Solid understanding of data modelling, ETL/ELT processes, and data warehousing. Proficiency in SQL and Python (especially PySpark), as well as other relevant programming languages. Passion for using data to drive key business decisions. Skills we'd love to see/Amazing Extras: Experience with Microsoft Fabric. Familiarity …
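As a sketch of the ELT pattern mentioned here, assuming hypothetical lake paths and a simple sales schema: load raw data first, then transform in place with Spark SQL.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("elt_example").getOrCreate()

# Extract/Load first, Transform afterwards with SQL: the "ELT" pattern.
spark.read.parquet("/lake/raw/sales").createOrReplaceTempView("raw_sales")

daily = spark.sql("""
    SELECT order_date,
           region,
           SUM(amount) AS total_amount,
           COUNT(*)    AS order_count
    FROM raw_sales
    GROUP BY order_date, region
""")
daily.write.mode("overwrite").partitionBy("order_date").parquet("/lake/curated/daily_sales")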
City of London, Greater London, UK Hybrid / WFH Options
Formula Recruitment
Azure cloud data lakes and services (Data Factory, Synapse, Databricks). Skilled in ETL/ELT pipeline development and big data tools (Spark, Hadoop, Kafka). Strong Python/PySpark programming and advanced SQL with query optimisation. Experience with relational, NoSQL, and graph databases. Familiar with CI/CD, version control, and infrastructure as code (Terraform). Strong analytical …
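To make the Spark-plus-Kafka combination concrete, a minimal Structured Streaming sketch: broker and topic names are hypothetical, and the spark-sql-kafka-0-10 package must be on the classpath.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
          .option("subscribe", "events")                      # hypothetical topic
          .load())

# Kafka delivers key/value as binary; cast to string before parsing.
parsed = stream.select(F.col("value").cast("string").alias("payload"))

query = (parsed.writeStream
         .format("parquet")
         .option("path", "/lake/raw/events")
         .option("checkpointLocation", "/lake/_checkpoints/events")
         .start())
query.awaitTermination()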
hear from you! Technical skills: Python (Competent); React frontend development (Intermediate/Competent); Dotnet (Beginner); Cloud technologies (Beginner/Intermediate), ideally Azure; ETL pipelines and data tools such as PySpark, Databricks; CI/CD tools (Beginner/Intermediate), e.g., Azure DevOps. Cross-functional skills: Problem-solving mindset with strong analytical abilities; Collaborative team spirit & great communication skills; Adaptability, ready …
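Where PySpark work meets CI/CD tools like Azure DevOps, pipelines usually run unit tests on every commit. A minimal pytest sketch of that idea, with a hypothetical transform under test:

# test_transform.py -- illustrative unit test a CI pipeline could run.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def add_vat(df, rate=0.2):
    """Transform under test: adds a gross-price column."""
    return df.withColumn("gross", F.col("net") * (1 + rate))

def test_add_vat(spark):
    df = spark.createDataFrame([(100.0,)], ["net"])
    result = add_vat(df).collect()[0]
    assert result["gross"] == pytest.approx(120.0)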
drive results. What we're looking for: Proven experience in Data Architecture and data modelling. Strong skills in Microsoft Azure tools (Fabric, OneLake, Data Factory). Confident with Python/PySpark and relational databases. Hands-on ETL/ELT experience. A problem-solver with a positive, can-do attitude. Bonus points if you bring: Tableau, Power BI, SSAS, SSIS or …
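A sketch of the "PySpark with relational databases" pairing: the extract step of an ETL job pulling a table over JDBC. Connection details are placeholders, and a JDBC driver jar must be on the classpath.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jdbc_extract").getOrCreate()

customers = (spark.read.format("jdbc")
             .option("url", "jdbc:sqlserver://myserver.database.windows.net;databaseName=sales")
             .option("dbtable", "dbo.customers")
             .option("user", "etl_user")
             .option("password", "********")  # use a secret store in practice
             .load())

customers.write.mode("overwrite").parquet("/lake/raw/customers")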
City of London, Greater London, UK Hybrid / WFH Options
Ascentia Partners
proactive problem-solver with strong analytical and communication skills, capable of working both independently and collaboratively. Desirable: Experience working with Azure (or other major cloud platforms). Familiarity with PySpark or other big data technologies. Understanding of version control systems (e.g., Git). Knowledge of pricing or modelling workflows and the impact of engineering decisions on model performance. Why …
AIP adoption and improve automation. What You'll Bring: Experience as a Data & AI Engineer; hands-on experience with Palantir Foundry (data integration, ontology, pipelines, applications); strong skills in Python, PySpark, SQL, and data modelling; practical experience with AIP features (RAG workflows, copilots, agent-based apps); ability to work independently and engage with non-technical stakeholders; strong problem-solving mindset …
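For a sense of what Foundry pipeline code looks like, a minimal sketch using Foundry's documented transforms-python decorator API; the dataset paths and column names are hypothetical.

from transforms.api import transform_df, Input, Output
from pyspark.sql import functions as F

@transform_df(
    Output("/Company/datasets/claims_clean"),   # hypothetical output dataset
    source=Input("/Company/datasets/claims_raw"),  # hypothetical input dataset
)
def compute(source):
    # Standardise and filter raw records before they feed the ontology.
    return (source
            .withColumn("claim_date", F.to_date("claim_date"))
            .filter(F.col("status").isNotNull()))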
to measure performance against business goals. Deliver training sessions for the Data Science community in your specialist area. TECHNICAL SKILLS REQUIRED: Strong programming skills in Python, SQL and PySpark. At least novice-level knowledge of the Data Science Toolbox (i.e. the fundamentals of Mathematics and Statistics, computer programming, Data Ingestion, Data Munging, Data visualisation, Machine Learning, Optimisation, Simulation …
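The data munging fundamentals mentioned here reduce, in PySpark, to deduplication, imputation, type enforcement, and sanity bounds. A short sketch with a hypothetical survey file:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("munging").getOrCreate()

df = spark.read.csv("/data/survey.csv", header=True, inferSchema=True)  # hypothetical file

cleaned = (df
           .dropDuplicates()
           .na.fill({"income": 0})                       # impute a numeric gap
           .withColumn("age", F.col("age").cast("int"))  # enforce types
           .filter(F.col("age").between(0, 120)))        # basic sanity bound

cleaned.describe("age", "income").show()  # quick statistical profile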
Factory for data ingestion and transformation. Work with Azure Data Lake, Synapse, and SQL DW to manage large volumes of data. Develop data transformation logic using SQL, Python, and PySpark code. Collaborate with cross-functional teams to translate business requirements into data solutions. Create mapping documents and transformation rules, and ensure quality delivery. Contribute to DevOps processes, CI/CD … development and big data solutions. Recent experience within Insurance Technology essential. Solid expertise with Azure Databricks, Data Factory, ADLS, Synapse, and Azure SQL. Strong skills in SQL, Python, and PySpark. Solid understanding of DevOps, CI/CD, and Agile methodologies. Excellent communication and stakeholder management skills. …
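A sketch of transformation logic mixing SQL and the DataFrame API, as this role describes; the ADLS paths, storage account, and policy schema are hypothetical, and on Azure the abfss:// URIs assume authentication is already configured.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("policy_transform").getOrCreate()

policies = spark.read.parquet("abfss://raw@mylake.dfs.core.windows.net/policies")
policies.createOrReplaceTempView("policies")

# Express the filter in SQL, then refine with the DataFrame API.
active = spark.sql("SELECT * FROM policies WHERE status = 'ACTIVE'")
enriched = active.withColumn("annual_premium", F.col("monthly_premium") * 12)

enriched.write.mode("overwrite").parquet("abfss://curated@mylake.dfs.core.windows.net/policies")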
You'll architect and maintain robust, scalable data pipelines and infrastructure that power our analytics, machine learning, and business intelligence initiatives. You'll work with cutting-edge technologies like Python, PySpark, AWS EMR, and Snowflake, and collaborate across teams to ensure data is clean, reliable, and actionable. Responsibilities: - Build and maintain scalable ETL pipelines using Python and PySpark to … reviews, and foster a culture of continuous learning and knowledge-sharing. Mandatory Skills Description: - 5+ years of experience in data engineering or software development - Strong proficiency in Python and PySpark - Hands-on experience with AWS services, especially EMR, S3, Lambda, and Glue - Deep understanding of Snowflake architecture and performance tuning - Solid grasp of data modeling, warehousing concepts, and SQL …
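A sketch of the EMR-to-Snowflake leg of such a pipeline, using the Spark-Snowflake connector (connector jars required); the S3 bucket and all connection values are placeholders.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("emr_to_snowflake").getOrCreate()

events = spark.read.parquet("s3://my-bucket/raw/events/")  # hypothetical bucket

sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "********",  # use a secret store in practice
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "LOAD_WH",
}

(events.write
 .format("net.snowflake.spark.snowflake")
 .options(**sf_options)
 .option("dbtable", "EVENTS")
 .mode("overwrite")
 .save())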
Role: Sr. Technology Architect. Technology: Data Modelling, ERWIN Modelling, Azure Architect, PySpark. Location: UK (London). Job Description: The Data Architect should have a key understanding of building data models, with experience in the Data Architecting and Engineering space on Spark, PySpark, ADF, Azure Synapse, and Data Lake. Your role: In the role of a Sr Technology Architect, you will primarily be responsible … Hands-on experience in technology consulting, enterprise and solutions architecture and architectural frameworks, and Data Modelling; experience with ERWIN Modelling. Hands-on experience in ADF, Azure Databricks, Azure Synapse, Spark, PySpark, Python/Scala, SQL. Hands-on experience in designing and building a Data Lake from multiple source systems/data providers. Experience in Data Modelling, architecture, implementation & testing. Experienced in …
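Data modelling in this setting often means building star schemas in the lake. A minimal PySpark sketch, with hypothetical paths and an assumed orders schema: derive a dimension with a surrogate key, then key the fact table against it.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("star_schema").getOrCreate()

orders = spark.read.parquet("/lake/raw/orders")  # hypothetical source

# Dimension: distinct customers with a surrogate key.
dim_customer = (orders.select("customer_id", "customer_name", "region")
                .dropDuplicates(["customer_id"])
                .withColumn("customer_sk", F.monotonically_increasing_id()))

# Fact: measures keyed by the surrogate key.
fact_orders = (orders.join(dim_customer.select("customer_id", "customer_sk"), "customer_id")
               .select("customer_sk", "order_id", "order_date", "amount"))

dim_customer.write.mode("overwrite").parquet("/lake/model/dim_customer")
fact_orders.write.mode("overwrite").parquet("/lake/model/fact_orders")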
a background in the insurance domain (Life, Health, Home, Compensation). What you'll do... Design, build, and own Palantir Foundry data pipelines end-to-end. Write production-grade Python, PySpark, and SQL for ELT on large distributed datasets. Implement warehousing patterns with strong data quality checks and lineage. Translate insurance requirements into robust data products via Foundry Ontology. Diagnose … engineers to deliver. What we're looking for... Strong experience in data engineering for large-scale distributed systems. Hands-on development experience with Palantir Foundry (this is a must). Deep PySpark + Spark SQL skills. Strong ELT/warehouse fundamentals and CI/CD mindset. Comfortable in Agile teams; clear communicator who owns outcomes. Bonus: experience in the insurance or …
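As one possible shape for the data quality checks mentioned here, a minimal PySpark sketch that gates a pipeline run on declarative rules; the claims dataset, paths, and rules are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
claims = spark.read.parquet("/lake/curated/claims")  # hypothetical dataset

# Simple declarative quality gates; fail the run if any rule is breached.
checks = {
    "null_claim_id": claims.filter(F.col("claim_id").isNull()).count(),
    "negative_amount": claims.filter(F.col("amount") < 0).count(),
    "duplicate_claim_id": claims.count() - claims.dropDuplicates(["claim_id"]).count(),
}

failures = {name: n for name, n in checks.items() if n > 0}
if failures:
    raise ValueError(f"Data quality checks failed: {failures}")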