start interviewing ASAP. Responsibilities: Azure cloud data engineering using Azure Databricks; data warehousing; data engineering; very strong with the Microsoft stack. ESSENTIAL: knowledge of PySpark clusters; Python and C# scripting experience; experience of message queues (Kafka); experience of containerization (Docker); FINANCIAL SERVICES EXPERIENCE (energy/commodities trading). If you have …
Engineering. Hands-on experience in designing and developing scripts for custom ETL processes and automation in Azure Data Factory, Azure Databricks, Azure Synapse, Python, PySpark, etc. Experience being customer-facing on numerous data-focused projects with a consultative approach. Ability to deliver high- to low-level designs for Data …
scientific Python toolset. Our tech stack includes Airbyte for data ingestion, Prefect for pipeline orchestration, and AWS Glue for managed ETL, along with Pandas and PySpark for pipeline logic implementation. We utilize Delta Lake and PostgreSQL for data storage, emphasizing the importance of data integrity and version control in our … testing frameworks. Proficiency in Python and familiarity with modern software engineering practices, including 12-factor, CI/CD, and Agile methodologies. Deep understanding of Spark (PySpark), Python (Pandas), orchestration software (e.g. Airflow, Prefect), and databases, data lakes, and data warehouses. Experience with cloud technologies, particularly AWS cloud services, with a …
a week in the Liverpool office - rest remote**** Senior Data Engineer: data, data modelling, migration, ETL, ETL tooling (Informatica IICS & IDMC), SQL, Python or PySpark, Agile, migrating data from on-prem to cloud. **** Informatica Cloud (IICS & IDMC) is essential - NOT PowerCenter**** A top insurance firm is looking for a … e.g. Scrum, SAFe) and tools (i.e. Jira, Azure DevOps). Data Engineer: data, data modelling, migration, ETL, ETL tooling (Informatica IICS & IDMC), SQL, Python or PySpark, Agile, migrating data from on-prem to cloud. Seniority Level: Mid-Senior level. Industry: Insurance, Financial Services. Employment Type: Full-time. Job Functions: Information …
Manchester Area, United Kingdom Hybrid / WFH Options
Vermelo RPO
between different packages for some of these model types would be an advantage. Experience in statistical and data science programming languages (e.g. R, Python, PySpark, SAS, SQL). A good quantitative degree (Mathematics, Statistics, Engineering, Physics, Computer Science, Actuarial Science). Experience of WTW’s Radar software is preferred. Proficient at …
such as Code Repo, Code Workbook, Pipeline Build, migration techniques, Data Connection, and security setup. Design and develop data pipelines, with excellent skills in PySpark and Spark SQL; hands-on with code build and deployment in Palantir. Must lead a team of 6-7 technical associates with PySpark …
Power BI would also be useful. Engineer with past experience with Java, data, and infrastructure (DevOps); Java is a key skill. Programming: Java, Python, PySpark. Storage mechanisms: MongoDB, Redshift, AWS S3. Cloud environments/infra: AWS (required); AWS Lambda, Terraform (nice to have). Data platforms: creating data pipelines within …
experience leading a data engineering team. Key tech: - AWS (S3, Glue, EMR, Athena, Lambda) - Snowflake, Redshift - DBT (Data Build Tool) - Programming: Python, Scala, Spark, PySpark or Ab Initio - Data pipeline orchestration (Apache Airflow) - Knowledge of SQL. This is a 6-month initial contract with a trusted client of ours.
Data Lake Storage, Azure Data Factory, Azure Synapse Analytics, Azure Databricks, Azure SQL Database, Azure Stream Analytics, etc. Strong Python or Scala with Spark/PySpark experience. Experience with relational and NoSQL databases. Significant experience and in-depth knowledge of creating data pipelines and associated design principles, standards, Data …
London (hybrid working) and is paying up to £100,000 per annum. Key skills required: Tools - Data Factory/Databricks/Python/PySpark/SQL/Power BI. Commercial experience of Kimball/Inmon data modelling. Knowledge of London Market insurance is highly desirable. Experience of Synapse beneficial …
data experience. Data inventory and data familiarisation; efficient data ingestion and ingestion pipelines; data cleaning and transformation; Databricks (ideally with Unity Catalog); Python and PySpark; CI/CD (ideally with Azure DevOps); unit testing (PyTest). If you have the above experience and are looking for a new contract role …
Python, PySpark, AWS, Oracle, Kafka, Banking. Become a key member of an agile team designing and delivering a market-leading, secure, and scalable reporting product. The team is migrating from Oracle to AWS cloud, so experience with either is nice to have. You serve as a seasoned member of … candidates able to work under these requirements without visa sponsorship will be able to be considered at this time. Required skills: - Either Python or PySpark experience. Ideally, you will be working as either a Python Developer (3+ years) or a Data Engineer using PySpark. - Either AWS or Oracle. Candidates …
South East London, London, United Kingdom Hybrid / WFH Options
The Bridge (IT Recruitment) Limited
inside IR35 on a remote basis for an energy provider. The key skills required for this Senior Data Engineer role are: Azure, Python, Databricks, PySpark. If you do have the required skills for this remote Senior Data Engineer contract, please do apply.
requirements and deliver solutions that drive business value. Requirements: 7+ years in a data engineering role; excellent proficiency in SQL, Python, Microsoft Azure, Databricks, PySpark; experience managing a team. Details: Start date: ASAP. Duration: 3 months, with an option for permanent extension. Day rate: up to £400 (Ltd), depending on experience. Annual …
a complete greenfield re-architecture from the ground up in Microsoft Azure. The tech you'll be playing with: Azure Data Factory, Azure Databricks, PySpark, SQL, DBT. What you need to bring: 1-3 years' experience in building data pipelines; SQL experience in data warehousing; Python experience would …
and ensuring best practices are understood and followed. Technical skills and qualifications: expert knowledge of Python, including libraries/frameworks such as pandas, NumPy, and PySpark; good understanding of OOP, software design patterns, and SOLID principles; good experience with Docker, Linux, and Airflow. Good knowledge …
Greater Manchester, England, United Kingdom Hybrid / WFH Options
Blue Wolf Digital
to join their team. The primary focus of the role is Databricks data engineering. You will be building data pipelines using Databricks, coding in PySpark, and supporting internal applications. You will also be using Python for data transformations and working across the Azure Data Platform. Must have: strong Databricks …
requirements, and design scalable and efficient solutions to meet business needs. Implement data pipelines, ETL processes, and data transformations. Strong experience with Azure Databricks, PySpark, and Python. Integrate diverse data sources and formats, including structured and unstructured data, streaming data, and APIs. Technical skills: strong proficiency in Python; extensive …
snowflake schemas. Knowledge of DevOps practices within a Power BI environment. Familiarity with Microsoft Fabric and Databricks. Expertise in SQL databases, data engineering with Python and PySpark, and knowledge of geospatial concepts and tools. As part of this engagement, you will work on initiatives that redefine business efficiency through AI. You …
building a massively distributed, cloud-hosted data platform that will be used by the entire firm. Tech stack: Python/Java, Dremio, Snowflake, Iceberg, PySpark, Glue, EMR, dbt, and Airflow/Dagster. Cloud: AWS, Lambdas, ECS services. This role would focus on various areas of data engineering, including: End …
STEM subject, e.g. Mathematics, Statistics, or Computer Science. Experience in personalisation and segmentation with a focus on CRM. Retail experience is a bonus. Experience with PySpark/Azure/Databricks is a bonus. Experience of management. HOW TO APPLY: Please register your interest by sending your CV to Kiran Ramasamy …
Leicester, England, United Kingdom Hybrid / WFH Options
Harnham
STEM subject, e.g. Mathematics, Statistics, or Computer Science. Experience in personalisation and segmentation with a focus on CRM. Retail experience is a bonus. Experience with PySpark/Azure/Databricks is a bonus. Experience of management. BENEFITS: pension scheme, gym membership, share options, bonus, hybrid working. HOW TO APPLY: Register …
Nottingham, England, United Kingdom Hybrid / WFH Options
Harnham
STEM subject, e.g. Mathematics, Statistics, or Computer Science. Experience in personalisation and segmentation with a focus on CRM. Retail experience is a bonus. Experience with PySpark/Azure/Databricks is a bonus. Experience of management. BENEFITS: pension scheme, gym membership, share options, bonus, hybrid working. HOW TO APPLY: Register …