London (City of London), South East England, United Kingdom
Mastek
platform. Optimise data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and cost. Ability to program in languages such as SQL, Python, R, YAML, and JavaScript. Data Integration: Integrate data from various sources … practices. Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage …
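The listing above asks for data quality checks and validation rules inside pipelines; as a minimal illustration only, a PySpark sketch of such a gate (the table and column names orders_raw, order_id, and amount are hypothetical):

```python
# Minimal sketch of a data-quality gate inside a pipeline.
# Table and column names are invented for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.read.table("orders_raw")

# Validation rules: the key must be present and the amount non-negative.
invalid = df.filter(F.col("order_id").isNull() | (F.col("amount") < 0))

if invalid.count() > 0:
    # Quarantine failing rows instead of failing the whole load.
    invalid.write.mode("append").saveAsTable("orders_quarantine")

# Keep only rows that pass every rule.
df.exceptAll(invalid).write.mode("overwrite").saveAsTable("orders_clean")
```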
ensure high availability and accessibility Requirements Minimum of 8 years of experience in data engineering At least 5 years of hands-on experience with Azure data services (Apache Spark, Azure Data Factory, Synapse Analytics, RDBMS such as SQL Server) Proven leadership and management experience in data engineering teams Proficiency in PySpark, Python (with Pandas), T-SQL, Spark SQL, and experience with CI/CD pipelines Strong understanding of data modeling, ETL processes, and data warehousing concepts Knowledge of version control systems like Git Excellent problem-solving, analytical, communication … manage multiple projects and meet deadlines Azure certifications such as Microsoft Certified: Azure Data Engineer Associate or Azure Solutions Architect Nice to Have Experience with Scala for Apache Spark Knowledge of other cloud platforms like AWS or GCP Our Benefits Include Group pension plan, life assurance, income protection, and critical illness cover Private medical insurance and dental care …
comprehensive logging, monitoring, and alerting tools to manage the platform, ensuring resilience and optimal performance are maintained. Data Integration and Transformation: Integrate and transform data from multiple organisational SQL databases and SaaS applications using end-to-end dependency-based data pipelines, to establish an enterprise source of truth. Create ETL and ELT processes using Azure Databricks, ensuring audit-ready financial data pipelines and secure data exchange with Databricks Delta Sharing and SQL Warehouse endpoints. Governance and Compliance: Ensure compliance with information security standards in our highly regulated financial landscape by implementing Databricks Unity Catalog for governance, data quality monitoring, and ADLS Gen2 encryption for audit compliance. Development and Process Improvement: Evaluate requirements, create technical design documentation … Metadata Driven Platform, Event-driven architecture. Proven experience of ETL/ELT, including Lakehouse, Pipeline Design, Batch/Stream processing. Strong working knowledge of programming languages, including Python, SQL, PowerShell, PySpark, Spark SQL. Good working knowledge of data warehouse and data mart architectures. Good experience in Data Governance, including Unity Catalog, Metadata Management, Data Lineage, Quality …
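To make the audit-ready ELT step this listing describes concrete, a hedged sketch: read from a source SQL database over JDBC, stamp lineage columns, and append to a Unity Catalog Delta table. It assumes a Databricks notebook (where spark and dbutils are predefined); the server, secret scope, and table names are invented:

```python
# Hedged sketch of an audit-ready ELT step on Azure Databricks.
# Connection details, secret scope, and table names are illustrative only.
from pyspark.sql import functions as F

src = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://finance-db:1433;databaseName=ledger")
    .option("dbtable", "dbo.transactions")
    .option("user", "svc_etl")
    .option("password", dbutils.secrets.get(scope="kv-scope", key="finance-db-pw"))
    .load()
)

# Lineage columns make the load auditable after the fact.
audited = (
    src.withColumn("_ingested_at", F.current_timestamp())
       .withColumn("_source_system", F.lit("ledger"))
)

# Unity Catalog uses a three-level namespace: catalog.schema.table.
audited.write.format("delta").mode("append").saveAsTable("finance.bronze.transactions")
```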
ideas and solutions to improve our data infrastructure and capabilities. What is needed to succeed: Technical skills: Problem-solving team player with an analytical mind. Strong knowledge of SQL and Spark SQL. Understanding of dimensional data modelling concepts. Experience with Azure Synapse Analytics. Understanding of streaming data ingestion processes. Ability to develop/manage Apache Spark …
Design and deliver quality solutions independently • Leading a team of Data Engineers and delivering solutions as a team Key skills/knowledge/experience: • Proficient in PySpark, Python, SQL with at least 5 years of experience • Working experience with the Palantir Foundry platform is a must • Experience designing and implementing data analytics solutions on enterprise data platforms and distributed computing (Spark … customer requirements into a best-fit design and architecture. • Demonstrated experience in end-to-end data management, data modelling, and data transformation for analytical use cases. • Proficient in SQL (Spark SQL preferred). • Experience with JavaScript/HTML/CSS a plus. Experience working in a Cloud environment such as Azure or AWS is …
be part of a team that's transforming how data powers retail, this is your opportunity. Your Role (Key Responsibilities) Design, build, and optimise robust data pipelines using PySpark, Spark SQL, and Databricks to ingest, transform, and enrich data from a variety of sources. Translate business requirements into scalable and performant data engineering solutions, working closely with squad members and stakeholders. … Community, ensuring standards, practices and principles are built into everything you do. About You (Experience & Qualifications) Experience building and maintaining data solutions. Experience with languages such as Python, SQL and, preferably, PySpark. Hands-on experience with Microsoft Azure data services, including Azure Data Factory, and a good understanding of cloud-based data architectures (e.g., data lakes, data vaults …
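As one illustration of the ingest-transform-enrich pattern this role describes, a minimal PySpark sketch mixing the DataFrame API with Spark SQL; the paths, tables, and columns are made up, and it assumes spark is an active SparkSession:

```python
# Illustrative ingest-transform-enrich step; all names are hypothetical.
from pyspark.sql import functions as F

sales = spark.read.format("delta").load("/mnt/bronze/sales")
stores = spark.read.table("reference.stores")

sales.createOrReplaceTempView("sales")
stores.createOrReplaceTempView("stores")

# Enrich transactional rows with store attributes via Spark SQL.
enriched = spark.sql("""
    SELECT s.*, st.region, st.store_name
    FROM sales s
    LEFT JOIN stores st ON s.store_id = st.store_id
""")

# Aggregate to a daily, region-level view for reporting.
daily = (
    enriched.groupBy("region", F.to_date("sale_ts").alias("sale_date"))
            .agg(F.sum("amount").alias("total_sales"))
)
daily.write.format("delta").mode("overwrite").saveAsTable("gold.daily_sales_by_region")
```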
service support for datasets. • Triage many possible courses of action in a high-ambiguity environment, making use of both quantitative analysis and business judgment. BASIC QUALIFICATIONS - Experience with SQL - 1+ years of data engineering experience - Experience with data modeling, warehousing and building ETL pipelines - Experience with one or more query language (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, Spark SQL, Scala) - Experience with one or more scripting language (e.g., Python, KornShell) PREFERRED QUALIFICATIONS - Experience with big data technologies such as: Hadoop, Hive, Spark, EMR - Experience with any ETL …
experience - Experience with data modeling, warehousing and building ETL pipelines - Experience with one or more query language (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, Spark SQL, Scala) - Experience with one or more scripting language (e.g., Python, KornShell) PREFERRED QUALIFICATIONS - Experience with big data technologies such as: Hadoop, Hive, Spark, EMR - Experience with any ETL …
requires security clearance at SC level. You must be SC cleared to be considered for the role. Tasks and Responsibilities: Engineering: Ingestion configuration. Write Python/PySpark and Spark SQL code for validation/curation in notebooks. Create data integration test cases. Implement or amend worker pipelines. Implement data validation/curation rules. Convert data model … Apps, AKS, Azure App Service, Web App. Good knowledge of real-time streaming applications, preferably with experience in Kafka real-time messaging or Azure Functions, Azure Service Bus. Spark processing and performance tuning. File formats and partitioning (e.g., Parquet, JSON, XML, CSV). Azure DevOps/GitHub. Hands-on experience in at least one of Python with knowledge of …
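Since this role pairs validation/curation code with data integration test cases, a hedged sketch of how the two can fit together; the function, rules, and sample data are invented, and the test assumes a pytest fixture that yields a local SparkSession:

```python
# Sketch of a testable curation rule plus a matching test case.
# All names and data are hypothetical.
from pyspark.sql import DataFrame, functions as F

def curate_customers(df: DataFrame) -> DataFrame:
    """Trim names, drop rows missing a customer_id, and deduplicate."""
    return (
        df.withColumn("name", F.trim(F.col("name")))
          .filter(F.col("customer_id").isNotNull())
          .dropDuplicates(["customer_id"])
    )

def test_curate_customers(spark):
    raw = spark.createDataFrame(
        [(1, " Ada "), (1, "Ada"), (None, "Ghost")],
        ["customer_id", "name"],
    )
    out = curate_customers(raw)
    assert out.count() == 1              # duplicate and null-key rows removed
    assert out.first()["name"] == "Ada"  # whitespace trimmed
```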
analysis, testing, and release management. Understanding of Agile methodologies, CI/CD tools, and full software development lifecycle. Proficiency with Azure Databricks, Data Factory, Storage, Key Vault, Git, SQL (Spark SQL), and Python (PySpark). Certifications: Azure Fundamentals (AZ-900), Azure Data Fundamentals (DP-900). Curiosity about technology, adaptability, agility, and a collaborative …
London, England, United Kingdom Hybrid / WFH Options
Skipton Building Society
development lifecycle understanding Agile methodologies experience Experience with CI/CD tools Key Technologies: Azure Databricks, Data Factory, Storage, Key Vault Source control experience (e.g., Git) Proficiency in SQL (Spark SQL) and Python (PySpark) Certifications: Microsoft Certified: Azure Fundamentals (AZ-900), Azure Data Fundamentals (DP-900) Curiosity about technology and adaptability Agile mindset, optimism …
lead projects on your own. Your qualifications and experience You are a pro at using SQL for data manipulation (at least one of PostgreSQL, MSSQL, Google BigQuery, Spark SQL) Modelling & Statistical Analysis experience, ideally customer related Coding skills in at least one of Python, R, Scala, C, Java or JS Track record of using data manipulation and machine learning libraries in one or more programming languages. Keen interest in some of the following areas: Big Data Analytics (e.g. Google BigQuery/BigTable, Apache Spark), Parallel Computing (e.g. Apache Spark, Kubernetes, Databricks), Cloud Engineering (AWS, GCP, Azure), Spatial Query Optimisation, Data Storytelling with (Jupyter) Notebooks, Graph Computing, Microservices Architectures …
the heart of the business. The role This role sits within the Group Enterprise Systems (GES) Technology team. The ideal candidate is an experienced Microsoft data warehouse developer (SQL Server, SSIS, SSAS) capable of working independently and within a team to deliver enterprise-class data warehouse solutions and analytics platforms. The role involves working on Actuarial Reserving systems … models. Build and maintain automated pipelines supporting BI and analytics use cases, ingesting, transforming, and loading data from multiple sources, structured and unstructured. Utilise enterprise-grade technology, primarily SQL Server 2019 and potentially Azure technologies, and explore other solutions where appropriate. Develop patterns, best practices, and standardized data pipelines to ensure consistency across the organisation. Essential Core Technical Experience 5 to 10+ years of experience in SQL Server data warehouse or data provisioning architectures. Advanced SQL query writing and stored procedure experience. Experience developing ETL solutions in SQL Server, including SSIS & T-SQL. Experience with Microsoft BI technologies (SQL Server Management Studio, SSIS, SSAS, SSRS). Knowledge of data/ …
City of London, London, United Kingdom Hybrid / WFH Options
La Fosse
AWS, Snowflake, etc. Collaborate across technical and non-technical teams Troubleshoot issues and support wider team adoption of the platform What You’ll Bring: Proficiency in Python, PySpark, Spark SQL or Java Experience with cloud tools (Lambda, S3, EKS, IAM) Knowledge of Docker, Terraform, GitHub Actions Understanding of data quality frameworks Strong communicator and team player …
Principal Data Engineer. We’re looking for someone who has these abilities and skills: Well-established Data & Analytics work experience. Sound understanding/experience of Python, Databricks, PySpark, Spark SQL and best practices. Expertise in Star Schema data modelling. Expertise in the design, creation and management of large datasets/data models. Experience working on building …
years of data engineering experience. Experience with data modeling, warehousing, and building ETL pipelines. Experience with query languages such as SQL, PL/SQL, HiveQL, Spark SQL, or Scala. Experience with scripting languages like Python or KornShell. Knowledge of writing and optimizing SQL queries for large-scale, complex datasets. PREFERRED QUALIFICATIONS Experience with big data technologies such as Hadoop, Hive, Spark, EMR. Experience with ETL tools like Informatica, ODI, SSIS, BODI, or DataStage. We promote an inclusive culture that empowers Amazon employees to deliver the best results for our customers. If you have a disability and require workplace accommodations during the application, hiring, interview, or onboarding process, please visit this link for more …
the data infrastructure and maintain a highly scalable, reliable and efficient data system to support the fast-growing business. You will work with analytic tools, write excellent SQL scripts, optimize the performance of SQL queries, and partner with internal customers to answer key business questions. We look for candidates who are self-motivated, flexible, hardworking … experience - Experience with data modeling, warehousing and building ETL pipelines - Experience with one or more query language (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, Spark SQL, Scala) - Experience with one or more scripting language (e.g., Python, KornShell) - Knowledge of AWS Infrastructure - Knowledge of writing and optimizing SQL queries in a business environment with large …
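Several of the listings above ask for experience optimizing SQL queries over large datasets; one common Spark technique is broadcasting a small dimension table so the join avoids shuffling the large side. A hedged sketch, with invented table names, assuming spark is an active SparkSession:

```python
# Illustrative query optimisation: broadcast the small dimension table.
# Table and column names are hypothetical.
from pyspark.sql import functions as F
from pyspark.sql.functions import broadcast

facts = spark.read.table("warehouse.order_facts")   # large fact table
dims = spark.read.table("warehouse.dim_products")   # small dimension table

# broadcast() ships the small table to every executor, so the join
# runs locally instead of shuffling the large fact table.
joined = facts.join(broadcast(dims), "product_id")

joined.groupBy("category").agg(F.countDistinct("order_id").alias("orders")).show()
```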
Strong communication and collaboration skills across teams. Key Technologies (awareness of) Azure Databricks, Data Factory, Storage, Key Vault Source control systems, such as Git dbt (Data Build Tool), SQL (Spark SQL), Python (PySpark) Certifications (Ideal) SAFe POPM or Scrum PSP Microsoft Certified: Azure Fundamentals (AZ-900) Microsoft Certified: Azure Data Fundamentals (DP-900) What …