… relevant tools (Sqoop, Flume, Kafka, Oozie, Hue, ZooKeeper, HCatalog, Solr, Avro, Parquet, Iceberg, Hudi)
- Experience developing software and data engineering code in one or more programming languages (Java, Python, PySpark, Node, etc.)
- AWS and other Data and AI aligned certifications
PREFERRED QUALIFICATIONS
- Ability to think strategically about business, product, and technical challenges in an enterprise environment
- Hands-on experience …
… in Azure (Data Factory and SQL)
- Building workflows in SQL, Spark and DBT
- Data and dimensional modelling
Skills & Qualifications
- Azure Data Factory, Synapse and SSIS
- Python/Spark/PySpark
- Ideally Snowflake and DBT …
… processes using AWS, Snowflake, etc.
- Collaborate across technical and non-technical teams
- Troubleshoot issues and support wider team adoption of the platform
What You’ll Bring:
- Proficiency in Python, PySpark, Spark SQL or Java
- Experience with cloud tools (Lambda, S3, EKS, IAM)
- Knowledge of Docker, Terraform, GitHub Actions
- Understanding of data quality frameworks
- Strong communicator and team player
What …
City of London, London, United Kingdom Hybrid / WFH Options
La Fosse
South East London, England, United Kingdom Hybrid / WFH Options
La Fosse
… months. Location: London
JOB DETAILS
Role Title: Senior Data Engineer
Note: Please do not submit the same profiles as for 111721-1.
Required Core Skills: Databricks, AWS, Python, PySpark, data modelling
Minimum years of experience: 7
Job Description:
- Must have hands-on experience in designing, developing, and maintaining data pipelines and data streams.
- Must have a strong working knowledge of moving/transforming data across layers (Bronze, Silver, Gold) using ADF, Python, and PySpark.
- Must have hands-on experience with PySpark, Python, AWS, and data modelling.
- Must have experience with ETL processes.
- Must have hands-on experience in Databricks development.
- Good to have: experience in developing and maintaining data integrity and accuracy, data governance, and data security policies …
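The Bronze/Silver/Gold layering named above is the medallion architecture: raw data lands in Bronze, is cleansed and conformed in Silver, and is aggregated for consumption in Gold. A minimal plain-Python sketch of the idea follows; in practice these steps would be PySpark DataFrame transformations on Databricks, and the record fields here are illustrative assumptions, not from the posting.

```python
# Minimal medallion-style flow: Bronze (raw) -> Silver (cleansed) -> Gold (aggregated).
# Plain Python stands in for what would normally be PySpark DataFrame operations.

bronze = [  # raw ingested records, warts and all
    {"order_id": "1", "amount": "19.99", "country": "uk"},
    {"order_id": "2", "amount": "bad",   "country": "UK"},  # malformed amount
    {"order_id": "3", "amount": "5.00",  "country": "de"},
]

def to_silver(rows):
    """Cleanse: drop unparseable rows, normalise types and casing."""
    out = []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # quarantine/drop malformed records
        out.append({"order_id": int(r["order_id"]),
                    "amount": amount,
                    "country": r["country"].upper()})
    return out

def to_gold(rows):
    """Aggregate: revenue per country, ready for reporting."""
    totals = {}
    for r in rows:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'UK': 19.99, 'DE': 5.0}
```

The key property each layer preserves is that downstream layers never re-read raw data: Gold is built only from Silver, so cleansing rules live in exactly one place.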
Strong analytical and troubleshooting skills.
Desirable Skills
- Familiarity with state management libraries (MobX, Redux)
- Exposure to financial data or market analytics projects
- Experience with data engineering tools (DuckDB, PySpark, etc.)
- Knowledge of automated testing frameworks (Playwright, Cypress)
- Experience of WebAssembly
- Python programming experience for data manipulation or API development
- Use of AI for creating visualisations
Soft …
- Leverage Azure services extensively, particularly Azure Storage, for scalable cloud solutions.
- Ensure seamless integration with AWS S3 and implement secure data encryption/decryption practices.
Python Implementation: Utilize Python and PySpark for processing large datasets and integrating with cloud-based data solutions.
Team Leadership: Manage and mentor a team of 3 engineers, fostering best practices in software development and code …
… and optimize workflows, ensuring efficient and reliable operations.
Required:
- 5-7 years of experience in software development with a focus on production-grade code.
- Proficiency in Java, Python, and PySpark; experience with C++ is a plus.
- Deep expertise in Azure services, including Azure Storage, and familiarity with AWS S3.
- Strong understanding of data security, including encryption/decryption.
- Proven …
… technologies, particularly Azure, this role represents a great next step in your data engineering career. The successful candidate will possess the following essential skills:
- Strong proficiency in Python or PySpark
- Data engineering experience, ideally with an Azure background
- Significant experience with SQL (preferably SQL Server)
- Excellent communication skills, capable of interacting with stakeholders of varying seniority
It would be …
Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom Hybrid / WFH Options
Method-Resourcing
… processes to transform and surface data for reporting and analytics
- Ensuring data quality and integrity through validation and cleansing techniques
What We're Looking For:
- Strong proficiency in SQL, PySpark and Python
- Hands-on experience with Microsoft Fabric
- Previous experience in a collaborative, data-driven team
Bonus Points For:
- Experience working with D365 F&O
- Background in ecommerce or …
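The validation and cleansing mentioned above usually reduces to rule-based checks applied before data is surfaced for reporting. A small illustrative sketch of that pattern follows; the column names and rules are assumptions for the example, not from the listing.

```python
# Rule-based data-quality check: each rule maps a column to a predicate.
# Rows failing any rule are split out for remediation rather than silently dropped.

rules = {
    "email": lambda v: isinstance(v, str) and "@" in v,
    "qty":   lambda v: isinstance(v, int) and v >= 0,
}

def validate(rows, rules):
    """Partition rows into (valid, rejected) against the rule set."""
    valid, rejected = [], []
    for row in rows:
        if all(rule(row.get(col)) for col, rule in rules.items()):
            valid.append(row)
        else:
            rejected.append(row)
    return valid, rejected

rows = [
    {"email": "a@example.com", "qty": 3},
    {"email": "not-an-email",  "qty": 1},   # fails email rule
    {"email": "b@example.com", "qty": -2},  # fails qty rule
]
valid, rejected = validate(rows, rules)
print(len(valid), len(rejected))  # 1 2
```

Keeping rejected rows (rather than discarding them) is what makes integrity auditable: the rejects can be reviewed, fixed at source, and re-ingested.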
Nuneaton, England, United Kingdom Hybrid / WFH Options
Hays
We're looking for someone with strong technical expertise and a passion for solving complex business problems. You'll bring:
- Strong experience with SQL, SQL Server DB, Python, and PySpark
- Proficiency in Azure Data Factory, Databricks (a must), and Cloudsmith
- Background in data warehousing and data engineering
- Solid project management capabilities
- Outstanding communication skills, translating technical concepts into …
Warwickshire, West Midlands, United Kingdom Hybrid / WFH Options
Hays
… experience with SSIS, SSRS, SSAS, SSMS
- Data warehousing, ETL processes, best-practice data management
- Azure cloud technologies (Synapse, Databricks, Data Factory, Power BI)
- Microsoft Fabric (not essential)
- Python/PySpark
- Proven ability to work in hybrid data environments
- Experience in finance or working with finance teams
- Ability to manage and lead onshore and offshore teams
- Exceptional stakeholder management and …
… time to reflect changing business requirements.
Preferred Skills and Experience
- Databricks
- Azure Data Factory
- Data Lakehouse, medallion architecture
- Microsoft Azure
- T-SQL development (MS SQL Server 2005 onwards)
- Python, PySpark
Experience of the following systems would also be advantageous:
- Azure DevOps
- MDS
- Kimball dimensional modelling methodology
- Power BI
- Unity Catalog
- Microsoft Fabric
Experience of the following business areas would …
… and optimize workflows, ensuring efficient and reliable operations.
Required:
- 6-10 years of experience in software development with a focus on production-grade code.
- Proficiency in Java, Python, and PySpark; experience with C++ is a plus.
- Deep expertise in Azure services, including Azure Storage, and familiarity with AWS S3.
- Strong understanding of data security, including encryption/decryption.
- Proven …
Job summary: We are seeking a skilled and motivated individual to join our Business Intelligence Team at Imperial College Healthcare. In this role, you will be responsible for developing information systems to support the Medical Director's Office, with a …
… inference, or revenue optimisation
- Experience with NLP, image processing, information retrieval, and deep learning models
- Experience with experiment design and conducting A/B tests
- Experience with Databricks and PySpark
- Experience working with AWS or another cloud platform (GCP/Azure)
Additional Information
Health + Mental Wellbeing: PMI and cash plan healthcare access with Bupa; subsidised counselling and coaching …
Overview: 3 contract data engineers to supplement the existing team during the implementation phase of a new data platform.
Main Duties and Responsibilities:
- Write clean and testable code using the PySpark and Spark SQL scripting languages, to enable our customer data products and business applications.
- Build and manage data pipelines and notebooks, deploying code in a structured, trackable and safe manner.
- Effectively create, optimise …
… framework at Chambers.
Skills and Experience:
- Excellent understanding of Data Lakehouse architecture built on ADLS.
- Excellent understanding of data pipeline architectures using ADF and Databricks.
- Excellent coding skills in PySpark and SQL.
- Excellent technical governance experience, such as version control and CI/CD.
- Strong understanding of designing, constructing, administering, and maintaining data warehouses and data lakes.
- Excellent oral …
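"Clean and testable" pipeline code, as asked for above, typically means factoring each transformation into a pure function that can be unit-tested locally without a Spark cluster, then applying it inside the notebook or job. A hedged sketch of that pattern (plain Python standing in for a PySpark transformation; the record shape and function name are illustrative assumptions):

```python
# Testable-pipeline pattern: the transformation is a pure function over records,
# so it can be unit-tested in CI with no cluster, then reused in the pipeline.

def dedupe_latest(rows, key="id", version="updated_at"):
    """Keep only the latest version of each record - a common cleansing step."""
    latest = {}
    for row in rows:
        k = row[key]
        if k not in latest or row[version] > latest[k][version]:
            latest[k] = row
    return sorted(latest.values(), key=lambda r: r[key])

# Unit test, runnable with no Spark dependency (ISO dates compare correctly as strings):
rows = [
    {"id": 1, "updated_at": "2024-01-01", "status": "new"},
    {"id": 1, "updated_at": "2024-03-01", "status": "shipped"},
    {"id": 2, "updated_at": "2024-02-01", "status": "new"},
]
result = dedupe_latest(rows)
assert [r["status"] for r in result] == ["shipped", "new"]
```

The same logic would be expressed on a Spark DataFrame with a window over the key ordered by the version column; keeping a pure-Python twin of the rule is what makes the pipeline's behaviour cheap to verify under version control and CI/CD.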
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
MAG (Airports Group)
For airports, for partners, for people. We are CAVU. At CAVU our purpose is to find new and better ways to make …
East London, London, United Kingdom Hybrid / WFH Options
McGregor Boyall Associates Limited
…s/PhD in Computer Science, Data Science, Mathematics, or a related field.
- 5+ years of experience in ML modelling, ranking, or recommendation systems.
- Proficiency in Python, SQL, Spark, PySpark, TensorFlow.
- Strong knowledge of LLM algorithms and training techniques.
- Experience deploying models in production environments.
Nice to Have:
- Experience in GenAI/LLMs
- Familiarity with distributed computing …
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
Fusion People
… hybrid). Salary: Competitive + 28% pension contributions. Job type: Permanent/full-time or 6-12 month contract (both options available).
Essential Experience:
- Strong Python programming knowledge, ideally with PySpark
- Knowledge of the Azure Databricks platform and its functionalities
- Adaptable, with a willingness to work flexibly as organizational needs evolve
- Ability to work well within a team and collaborate …
… practices. This is a fantastic opportunity for a curious, solutions-focused data scientist to help build out our capability, working with cutting-edge tools like Databricks, AWS data services, PySpark, and CI/CD pipelines.
What's in it for you? You'll be joining a collaborative, supportive team with a real passion for data-led innovation. It's …
… business impact - we'd love to hear from you.
About you:
- 2-5 years of experience in Data Science or a related field
- Strong programming skills in Python and PySpark
- Strong data science modelling skills across classification, regression, forecasting, and/or NLP
- Analytical mindset with the ability to present insights to both technical and non-technical audiences
- Experience …