Liverpool office 2-3 days a week - rest remote* Senior Data Engineer, Data, Data Modelling, Migration, ETL, ETL Tooling (Informatica IICS & IDMC), SQL, Python or PySpark, Agile, migrating data from on-prem to cloud. *Informatica (IICS & IDMC) is essential* A top reinsurance firm are looking for a Senior/Lead … cloud and tooling - Informatica and Azure are highly desired. Data Engineer, Data, Data Modelling, Migration, ETL, ETL Tooling (Informatica IICS & IDMC), SQL, Python or PySpark, Agile, migrating data from on-prem to cloud. …
understand consumers. Hands-on data engineering/development experience, preferably in a cloud/big data environment Skilled in at least one of Python, PySpark, SQL or similar Experience in guiding or managing roles in insight or data functions, delivering data projects and insight to inspire action and drive …
South Harting, England, United Kingdom Hybrid / WFH Options
Adecco
learning, probability, statistics, and quantitative risk modelling. High proficiency in Python and SQL. Experience with big data technologies and tools such as Databricks and PySpark is highly desirable. Essential experience in Probabilistic Risk Modelling. Highly desirable experience with Monte Carlo, Copula, Gamma, statistical modelling, financial modelling, and stochastic modelling. …
the Financial Services with experience in … Statistical Models, Computer Vision, Predictive Analytics, Data Visualization, Large Language Models (LLM), NLP, AI, Machine Learning, MLOps, Python, PySpark, Azure, Agile, MetaBase, then please apply. You can email your CV to matt@hawksworthuk.com or message me on LinkedIn. Ideally you'll have plenty of …
Nottingham, England, United Kingdom Hybrid / WFH Options
Harnham
STEM subject e.g. Mathematics, Statistics or Computer Science; Experience in personalisation and segmentation with a focus on CRM; Retail experience is a bonus; Experience with PySpark/Azure/Databricks is a bonus; Management experience. BENEFITS: Pension scheme, Gym Membership, Share options, Bonus, Hybrid working. HOW TO APPLY: Register …
Leicester, England, United Kingdom Hybrid / WFH Options
Harnham
STEM subject e.g. Mathematics, Statistics or Computer Science; Experience in personalisation and segmentation with a focus on CRM; Retail experience is a bonus; Experience with PySpark/Azure/Databricks is a bonus; Management experience. BENEFITS: Pension scheme, Gym Membership, Share options, Bonus, Hybrid working. HOW TO APPLY: Register …
incorrect or not received on time; outages with the end users of a data pipeline. What We Value: reading and writing code in Python, PySpark and Java; an understanding of Spark and an interest in learning the basics of tuning Spark jobs; pipeline monitoring - team members should be able to use …
advice to analytical users on how they can access and utilise the new datasets. Qualities: Comfortable with Python - ideally experience with Apache Spark and PySpark; Previous data analytics software experience; Able to scope new integrations and translate analytical user needs into technical requirements; UK based – data analytics system can …
mostly data services, using AWS services to build data pipelines (S3, Glue, Lambda, Redshift); Python - used extensively, they need someone who ships code; PySpark - some services are written in it; Airflow - they use Airflow from AWS. In summary, they need a data engineer who is expert with AWS, with … a very strong background in Python/PySpark. In addition, they would like at least one of the resources between them to have experience with each of the following: AWS infra provisioning (Cloud Development Kit, Terraform); Docker or Kubernetes; databases and query optimization. This engineering team supports marketing … datasets they’re bringing in, and holds responsibility for existing tools as well as building tools that don’t exist yet. Skills: Python, AWS, PySpark, Lambda, Glue, Redshift, S3, Terraform, Docker, Kubernetes. Job Title: Data Engineer. Location: London, UK. Job Type: Contract. Trading as TEKsystems. Allegis Group Limited, Maxis …
I have placed quite a few candidates with this organisation now, and all have given glowing reviews: they're the leader in legal representation comparison and have rather interesting, unique data to work with. They are using modern Azure …
a week in the Liverpool office - rest remote**** Senior Data Engineer, Data, Data Modelling, Migration, ETL, ETL Tooling (Informatica IICS & IDMC), SQL, Python or PySpark, Agile, migrating data from on-prem to cloud. **** Informatica Cloud (IICS & IDMC) is essential - NOT PowerCenter**** A top insurance firm are looking for a … e.g. Scrum, SAFe) and tools (e.g. Jira, Azure DevOps). Data Engineer, Data, Data Modelling, Migration, ETL, ETL Tooling (Informatica IICS & IDMC), SQL, Python or PySpark, Agile, migrating data from on-prem to cloud. Seniority Level: Mid-Senior level. Industry: Insurance, Financial Services. Employment Type: Full-time. Job Functions: Information …
Azure Cloud platform; Knowledge of orchestrating workloads on cloud; Ability to set and lead the technical vision while balancing business drivers; Strong experience with PySpark and Python programming; Proficiency with APIs, containerization and orchestration is a plus. Qualifications: Bachelor's and/or Master's degree. About you: You are …
can offer you exposure to the latest technologies. We are looking for a senior Data Engineer who has solid Python skills as well as PySpark, Databricks and SQL, plus Data Modelling and Azure Data Factory. Azure DevOps would be a distinct advantage. Strong communication and business …
Surrey, England, United Kingdom Hybrid / WFH Options
Hawksworth
that are comfortable with terms like: Statistical Models, Computer Vision, Predictive Analytics, Data Visualization, Large Language Models (LLM), NLP, AI, Machine Learning, MLOps, Python, PySpark, and Azure. Flexible Working: the role is 60% work from home. Sponsorship: sadly, sponsorship isn't available for these roles. About You: Bachelor's degree …
GitHub for version control, you will champion DevOps practices to ensure seamless collaboration and automation across the data engineering lifecycle. Your proficiency in SQL, PySpark, and Python will be helpful in transforming raw data into valuable insights, while your familiarity with Kafka will enable real-time data processing capabilities. … responsibilities: Lead the design, development, and maintenance of Azure-based data pipelines and analytical solutions using Databricks, Synapse, and other relevant services. Leverage SQL, PySpark, and Python to perform data transformations, aggregations, and analysis on large datasets. Architect data storage solutions using Azure SQL Database, Azure Data Lake Storage …
Birmingham, England, United Kingdom Hybrid / WFH Options
Lorien
of this is a strong preference; however, other Cloud platforms like AWS/GCP are acceptable. • Coding Languages - Experience using Python with data (Pandas, PySpark) would be an advantage. Other languages such as C# would be beneficial but not essential. Their lovely offices are based in the West Midlands …
Mart. Utilize Vector Databases, Cosmos DB, Redis, and Elasticsearch for efficient data storage and retrieval. Demonstrate proficiency in programming languages including Python, Spark, Databricks, PySpark, SQL, and ML algorithms. Implement Machine Learning models and algorithms using PySpark, scikit-learn, and other relevant tools. Manage Azure DevOps, CI/… environments, Azure Data Lake, Azure Data Factory, Microservices architecture. Experience with Vector Databases, Cosmos DB, Redis, Elasticsearch. Strong programming skills in Python, Spark, Databricks, PySpark, SQL, ML Algorithms, Gen AI. Knowledge of Azure DevOps, CI/CD pipelines, GitHub, Kubernetes (AKS). Experience with MLOps tools such …
systems and offer improvements that will help reduce technical/code/engineering debt. Key Skills: Extensive experience with Machine Learning and Spark/PySpark; Recommendation systems, pattern recognition, data mining, artificial intelligence; Modern parallel computing: distributed clusters, multicore servers, GPUs; Experience with developing machine learning models at …
months to begin with & it's extendable. Location: Leeds, UK (min 3 days onsite). Context: Legacy ETL code (for example DataStage) is being refactored into PySpark using Prophecy low-code/no-code and available converters. Converted code is causing failures/performance issues. Skills: Spark Architecture – component understanding around Spark … Data Integration (PySpark, scripting, variable setting etc.), Spark SQL, Spark Explain plans. Spark SME – able to analyse Spark code failures through Spark plans and make corrective recommendations. Spark SME – able to review PySpark and Spark SQL jobs and make performance improvement recommendations. Spark SME – able … are Cluster level failures. Cloudera (CDP) – knowledge of how Cloudera Spark is set up and how the runtime libraries are used by PySpark code. Prophecy – high-level understanding of the low-code/no-code Prophecy setup and its use to generate PySpark code.
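A common cause of the failures and slow converted jobs this role describes is key skew in joins and aggregations; before (or alongside) reading a Spark plan, an SME will often sample the offending key column and check its distribution. The helper below is a minimal, framework-free sketch of that check - the function names, threshold, and sample data are illustrative assumptions, not part of the role description:

```python
from collections import Counter


def skew_ratio(keys):
    """Return (hot_key, ratio), where ratio is the share of rows
    held by the most frequent key in the sample."""
    counts = Counter(keys)
    hot_key, hot_count = counts.most_common(1)[0]
    return hot_key, hot_count / len(keys)


def recommend(keys, threshold=0.5):
    """Flag a sampled key column as skewed when one key dominates.
    In Spark terms, a dominant key usually means salting the key or
    broadcasting the smaller side instead of a plain shuffle join."""
    hot_key, ratio = skew_ratio(keys)
    if ratio >= threshold:
        return f"skewed: key {hot_key!r} holds {ratio:.0%} of rows - consider salting/broadcast"
    return "no significant skew detected"
```

On a real engagement the sample would come from the converted PySpark job itself (e.g. a `groupBy().count()` on the join key), and any recommendation would then be checked against the physical plan produced by `df.explain()`.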
Nottingham, Nottinghamshire, East Midlands, United Kingdom
Experian Ltd
Glue and SageMaker; Infrastructure-as-Code tools and approaches (we use the AWS CDK with CloudFormation); Data processing frameworks such as pandas, Spark and PySpark; Machine learning concepts like model training, model registry, model deployment and monitoring; Development and CI/CD tools (we use GitHub, CodePipeline and CodeBuild) …
and industry standards for the organization. Strong experience with Azure cloud services such as ADF, ADLS and Synapse. Proficiency in querying languages such as SQL, PySpark and Python, and familiarity with data visualization tools (e.g. Power BI). Strong communication skills to gather business requirements from stakeholders and propose the best …
experience-related problems such as workforce management, demand forecasting, or root cause analysis Strong visualisation skills including experience with Tableau Familiarity with Databricks and PySpark for data manipulation and analysis Familiarity with Git-based source control methodologies, including branching and pull requests A self-starter, passionate about converting data …