a week in the Liverpool office, rest remote. Senior Data Engineer: Data Modelling, Migration, ETL, ETL Tooling (Informatica IICS & IDMC), SQL, Python or PySpark, Agile, migrating data from on-prem to cloud. Informatica Cloud (IICS & IDMC) is essential - NOT PowerCenter. A top insurance firm are looking for a … e.g. Scrum, SAFe) and tools (e.g. Jira, Azure DevOps). Seniority Level: Mid-Senior level. Industry: Insurance, Financial Services. Employment Type: Full-time. Job Functions: Information
South Harting, England, United Kingdom Hybrid / WFH Options
Adecco
learning, probability, statistics, and quantitative risk modelling. High proficiency in Python and SQL. Experience with big data technologies and tools such as Databricks and PySpark is highly desirable. Essential experience in probabilistic risk modelling. Highly desirable experience with Monte Carlo, Copula, Gamma, statistical modelling, financial modelling, and stochastic modelling.
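As an aside for readers unfamiliar with the techniques this listing names, the Monte Carlo side of probabilistic risk modelling can be sketched in a few lines of Python. The Gamma loss distribution and every parameter below are invented for illustration only, not taken from the listing.

```python
import numpy as np

# Illustrative Monte Carlo risk model: annual aggregate losses drawn from
# a Gamma distribution (shape/scale chosen arbitrarily for this sketch).
rng = np.random.default_rng(seed=42)

def simulate_losses(n_trials=100_000, shape=2.0, scale=10_000.0):
    """Simulate n_trials annual aggregate losses from a Gamma model."""
    return rng.gamma(shape, scale, size=n_trials)

def value_at_risk(losses, quantile=0.95):
    """Empirical Value-at-Risk: the loss level exceeded in (1 - quantile) of trials."""
    return float(np.quantile(losses, quantile))

losses = simulate_losses()
var_95 = value_at_risk(losses)  # roughly the 95th-percentile annual loss
```

In a real engagement the distribution would be fitted to claims data rather than assumed, but the simulate-then-take-a-quantile structure is the core of the technique.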
Maidstone, Kent, United Kingdom Hybrid / WFH Options
Harnham
LEAD DATA ENGINEER | REMOTE | UP TO £75,000 + BENEFITS | PERMANENT. Are you an expert at using PySpark and Databricks? Do you have prior experience working within the healthcare industry? This role is a fantastic opportunity to work with a large organisation on some massive and challenging projects. THE COMPANY: This company is a leading … journey. THE ROLE: Migrating data onto the Azure platform. Implementing legacy data into the new data platform. YOUR SKILLS AND EXPERIENCE: Healthcare experience is preferable. Advanced knowledge and commercial experience using PySpark and Databricks. Cloud experience, preferably with Azure. Strong communication skills. THE BENEFITS: A salary of up to £75,000. Fully remote. Excellent healthcare. Great training/upskilling program. THE PROCESS: 1st stage - 1 hour
Engineer you will be pivotal in designing, developing, and maintaining data architecture and infrastructure. The ideal candidate should have a strong foundation in Python, PySpark, SQL, and ETL processes, along with proven experience in implementing solutions in a cloud environment. Roles & Responsibilities: Experienced Data Engineer with a background in
understand consumers. Hands-on data engineering/development experience, preferably in a cloud/big data environment Skilled in at least one of Python, PySpark, SQL or similar Experience in guiding or managing roles in insight or data functions, delivering data projects and insight to inspire action and drive
Bristol, England, United Kingdom Hybrid / WFH Options
Aviva
Skills and experience we’re looking for: Proven track record productionising data pipelines at scale Hands-on experience or understanding of: Python, Spark (e.g., PySpark, Spark tuning, Spark UI); Data Engineering technologies and principles (e.g., Data Lakes, Batch/Streaming); Cloud technologies and principles (e.g., Kubernetes, EMR, S3) Understanding
Nottingham, England, United Kingdom Hybrid / WFH Options
Harnham
STEM subject e.g. Mathematics, Statistics or Computer Science Experience in personalisation, segmentation with a focus on CRM Retail experience is a bonus Experience with PySpark/Azure/Databricks is a bonus Experience of management BENEFITS: Pension scheme Gym Membership Share options Bonus Hybrid working HOW TO APPLY: Register
I have placed quite a few candidates with this organisation now, all have said glowing reviews about it, they're the leader in legal representation comparison and have rather interesting, unique data to work with. They are using modern Azure
with operational stakeholders to propose how a machine learning model could benefit the process, and finally building the model to realise the benefit. Experience using PySpark to process large-scale data would be advantageous, particularly within the Databricks platform. Familiarity with insurance claims data is preferred but not essential.
Leicester, England, United Kingdom Hybrid / WFH Options
Harnham
STEM subject e.g. Mathematics, Statistics or Computer Science Experience in personalisation, segmentation with a focus on CRM Retail experience is a bonus Experience with PySpark/Azure/Databricks is a bonus Experience of management BENEFITS: Pension scheme Gym Membership Share options Bonus Hybrid working HOW TO APPLY: Register
with operational stakeholders to propose how a machine learning model could benefit the process, and finally building the model to realise the benefit. Experience using PySpark to process large-scale data would be advantageous, particularly within the Databricks platform. Familiarity with insurance claims data is preferred but not essential.
performance, scalability, and reliability. Technical skills required: Redshift; Glue (incl. Glue Studio, Glue Data Quality, Glue DataBrew); Step Functions; Athena; Lambda; Kinesis; Python, Spark, PySpark, SQL. Your contributions as a Data Engineer will directly impact the organization's operations and revenue. In addition to a competitive annual salary, we
requiring 2-3 days onsite) and is paying up to £110,000 per annum. Key Skills: Strong commercial experience with Python/SQL/PySpark; Knowledge of converting business requirements to engineering processes; Azure environment - Databricks & Data Factory; Industry experience with Insurance would be highly desirable. The processing
Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom Hybrid / WFH Options
DWP Digital
currently on-prem, but the direction of travel is cloud engineering, and you'll be executing code across the following tech stack: Azure, Databricks, PySpark and Pandas. DWP Digital is a great place to work; we offer a supportive and inclusive environment where you can grow your career and … make a real difference. Essential criteria: Commercial experience of Databricks, PySpark and Pandas; Commercial experience of Azure data engineering tools such as Azure Data Factory, dedicated SQL pools and ADLS Gen2; Experience of working with data lakes; An understanding of dimensional modelling. Details. Wages. Perks. You'll join
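For readers unfamiliar with the dimensional modelling this ad asks for, here is a minimal star-schema sketch in Pandas (one of the listed stack components): a fact table joined to a date dimension and rolled up by a dimension attribute. The tables, columns, and figures are all hypothetical.

```python
import pandas as pd

# Hypothetical date dimension: one row per calendar date, with a surrogate key.
dim_date = pd.DataFrame({
    "date_key": [20240101, 20240102],
    "calendar_date": pd.to_datetime(["2024-01-01", "2024-01-02"]),
    "month_name": ["January", "January"],
})

# Hypothetical fact table: one row per payment, referencing the dimension key.
fact_payments = pd.DataFrame({
    "date_key": [20240101, 20240101, 20240102],
    "amount": [100.0, 250.0, 75.0],
})

# Typical dimensional query: join facts to the dimension, then
# aggregate by a dimension attribute.
report = (
    fact_payments
    .merge(dim_date, on="date_key", how="left")
    .groupby("month_name", as_index=False)["amount"]
    .sum()
)
```

On Databricks the same join-then-aggregate shape would usually be expressed in PySpark or SQL over lake tables; Pandas is used here only to keep the sketch self-contained.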
solutions in a production setting. Knowledge of developing real-time data stream systems (ideally Kafka). Proven track record in developing data systems using PySpark and Apache Spark for batch processing. Capable of managing data intake from various sources, including data streams, unstructured data, relational databases, and NoSQL databases.
Basingstoke, England, United Kingdom Hybrid / WFH Options
Blatchford
with various databases e.g. MS SQL, Azure Cosmos DB. Skilled at optimizing large and more complicated SQL statements. Proficiency in Python and experience with PySpark. Experience using: CI/CD, Microsoft Azure, and Azure DevOps in an agile environment. Knowledge of Azure ETL services, e.g. Data Factory, Synapse
understand consumers. Hands-on data engineering/development experience, preferably in a cloud/big data environment Skilled in at least one of Python, PySpark, SQL or similar Experience in guiding or managing roles in insight or data functions, delivering data projects and insight to inspire action and drive
Surrey, England, United Kingdom Hybrid / WFH Options
Hawksworth
that are comfortable with terms like: Statistical Models, Computer Vision, Predictive Analytics, Data Visualization, Large Language Models (LLM), NLP, AI, Machine Learning, MLOps, Python, PySpark, and Azure. Flexible Working: The role is 60% work from home. Sponsorship: Sadly, sponsorship isn't available for these roles. About You: Bachelor's Degree
a complete greenfield re-architecture from the ground up in Microsoft Azure. The tech you'll be playing with: Azure Data Factory, Azure Databricks, PySpark, SQL, dbt. What you need to bring: 1-3 years' experience in building data pipelines; SQL experience in data warehousing; Python experience would
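As a small illustration of the "SQL experience in data warehousing" this ad lists, the sketch below shows the typical roll-up query a pipeline would produce, using Python's built-in sqlite3 as a stand-in for a warehouse engine; the orders table and its contents are invented.

```python
import sqlite3

# sqlite3 stands in for a warehouse engine (Synapse, Databricks SQL, etc.).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "North", 120.0), (2, "South", 80.0), (3, "North", 50.0)],
)

# Typical warehouse output: revenue rolled up by region.
rows = conn.execute(
    "SELECT region, SUM(amount) AS revenue "
    "FROM orders GROUP BY region ORDER BY region"
).fetchall()
conn.close()
```

In the stack named above, dbt would normally own queries like this as versioned models rather than inline strings; the SQL itself is the same shape.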
Greater Manchester, England, United Kingdom Hybrid / WFH Options
MRJ Recruitment
experience working directly with C-suite execs. Technical Expertise Needed: At least 3 years Databricks (must have); Proven experience delivering scalable data pipelines; PySpark; SQL; DevOps/DataOps/CI/CD; Expertise in designing, constructing, administering, and maintaining data warehouses and data lakes; Data Modelling/Data Architecture; Data Migration
/marketing but not required. Candidates should be looking to work in a fast-paced, startup-feel environment. Tech across: Python, SQL, AWS, Databricks, PySpark, A/B Testing, MLflow, APIs. Apply below
customer modelling but not required. Candidates should be looking to work in a fast-paced, startup-feel environment. Tech across: Python, SQL, AWS, Databricks, PySpark, A/B Testing, MLflow, APIs. Apply below
experience in Data Modeling within a cloud-based data platform. Strong experience with SQL Server. Azure data engineering stack, including Azure Synapse and Azure Data Lake. Python, PySpark and T-SQL. In return you will be offered a competitive salary and benefits package, remote working options and an opportunity to work with
experience as a Data Engineer. Strong proficiency in AWS and relevant technologies. Good communication skills. Preferably experience with some or all of the following: PySpark, Kubernetes, Terraform. Experience in a start-up/scale-up or fast-paced environment is desirable. HOW TO APPLY: Please register your interest by
up with a high-performance data architecture. Good to have: Retail functional knowledge. Must have: good knowledge and hands-on experience in Python, PySpark, ADF and ADB. Good to have: knowledge in ADF CI/CD. Experience in designing, architecting and implementing large-scale data processing, data storage, data … delivery teams. Supporting business development and ensuring high levels of client satisfaction during delivery. Skills: Must have strong hands-on technical skills in Python, PySpark, Azure Databricks, Spark, ETL. Cloud: Azure preferred. Good to have: knowledge of ADF CI/CD.